3D reconstruction from multiple images

Sorry if it’s a trivial question, but I’m a beginner.

I am working on a 3D-reconstruction-from-multiple-images project.

I am working with these datasets.

There is a par.txt file containing the camera intrinsic and extrinsic parameters. As far as I know, the extrinsic parameters describe the translation and rotation of the camera coordinate frame with respect to the world coordinate frame.

How can I know where the world coordinate origin is?
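One way to make this concrete: the world origin is simply wherever the dataset authors fixed it (often the first camera or a scene landmark), but you can locate each camera relative to it. Assuming the common convention that the extrinsics map world to camera coordinates, X_cam = R·X_world + t, the camera center in world coordinates is C = -Rᵀt. A minimal sketch with made-up numbers:

```python
import numpy as np

# Hypothetical extrinsics for one camera, assuming the convention
# X_cam = R @ X_world + t (common in multi-view datasets; check yours).
R = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])   # example: a 90-degree rotation about x
t = np.array([0.0, 0.0, 5.0])

# The camera center expressed in world coordinates is C = -R^T t.
C = -R.T @ t
print(C)  # position of this camera relative to the world origin
```

Plotting all the camera centers this way usually reveals where the origin sits inside the scene.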

How to do 3D Reconstruction with Photogrammetry?

I recently started experimenting with photogrammetry and I was wondering what it would take to create my own 3D reconstruction software.

The goal would be to take a set of input images of a real-world scene and reconstruct the geometry into a 3D mesh.

In my mind the process would probably look something like this:

  1. Analyze the input images for matching features
  2. Generate a 3D point cloud from the camera motion
  3. Turn the point cloud into a 3D mesh
  4. Generate texture maps for the mesh
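To give a flavor of step 2: once features are matched across two calibrated views, each correspondence can be triangulated into a 3D point via the direct linear transform (DLT). A minimal sketch, with made-up camera matrices and a single point:

```python
import numpy as np

# Toy sketch of triangulating one 3D point from two views via DLT,
# assuming known 3x4 projection matrices P1, P2 (values made up here).
def triangulate(P1, P2, x1, x2):
    # Build the homogeneous system A X = 0 from x ~ P X for both views.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # dehomogenize

# Two simple cameras: identity pose, and a one-unit baseline along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]

print(triangulate(P1, P2, x1, x2))  # recovers approximately X_true
```

Real pipelines (e.g. the COLMAP or OpenMVG toolchains) do this for thousands of matches, with bundle adjustment to refine poses and points jointly.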

I am new to the topic and thus I was wondering if someone could provide resources / ideas on how to approach this endeavor?

How does image reconstruction take place in neural network?

I am reading through and thinking about how neural networks work and have been reading about convolutional neural networks (CNNs). I am particularly interested in image filtering (or enhancing) using CNNs. The thing that confuses me is: how exactly does a CNN produce the output, filtered/enhanced image? From what I understand, each layer convolves the previous input into more distinctive features, so aren’t we essentially losing details? How would the network then know that, for example, this noisy image A should be cleaned up here and there, such that it is trained to produce a clean image B?
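One piece of the puzzle: with "same" padding, a convolution keeps the spatial size, so a stack of such layers maps an image to an image rather than shrinking it away; no resolution has to be lost. The kernels are then learned from (noisy, clean) training pairs rather than hand-written. A toy sketch with a hand-written smoothing kernel standing in for a learned one:

```python
import numpy as np

# Minimal "same"-padded 2D convolution: the output has the same spatial
# size as the input, which is why image-to-image CNNs are possible at all.
def conv2d_same(img, kernel):
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

noisy = np.random.rand(8, 8)
blur = np.full((3, 3), 1.0 / 9.0)    # hand-written kernel; a trained CNN
denoised = conv2d_same(noisy, blur)  # would *learn* its kernels instead

print(noisy.shape, denoised.shape)   # both (8, 8): no resolution is lost
```

Denoising networks such as DnCNN also use residual learning: they predict the noise and subtract it from the input, so fine details pass through untouched. The "knowledge" of what to clean up comes entirely from minimizing the difference between the network's output on noisy image A and the clean target B during training.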

What are the difference and relation between type checking and type reconstruction?

In Types and Programming Languages by Pierce,

ML-style let-polymorphism was first described by Milner (1978). A number of type reconstruction algorithms have been proposed, notably the classic Algorithm W of Damas and Milner (1982; also see Lee and Yi, 1998). The main difference between Algorithm W and the presentation in this chapter is that the former is specialized for “pure type reconstruction”—assigning principal types to completely untyped lambda-terms—while we have mixed type checking and type reconstruction, permitting terms to include explicit type annotations that may, but need not, contain variables. This makes our technical presentation a bit more involved (especially the proof of completeness, Theorem 22.3.7, where we must be careful to keep the programmer’s type variables separate from the ones introduced by the constraint generation rules), but it meshes better with the style of the other chapters.

What are the difference and relation between type checking and type reconstruction?

Does type checking apply only to terms with explicit type annotations which don’t contain type variables?

Does type reconstruction apply only to terms either without explicit type annotations, or with explicit type annotations containing type variables?
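Roughly, and as one way to read the passage: checking only *confirms* a fully annotated term, while reconstruction *solves for* the unknowns (type variables) via unification; TAPL's mixed presentation lets both happen on the same term. A toy sketch of that distinction (not TAPL's algorithm; type encodings are made up, with strings like "?a" as type variables):

```python
# Types: "Int", "Bool", type variables "?a", and arrows ('->', arg, res).

def resolve(t, subst):
    """Follow substitution chains until t is not a bound variable."""
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def unify(t1, t2, subst):
    """Solve the constraint t1 = t2, extending substitution `subst`."""
    t1, t2 = resolve(t1, subst), resolve(t2, subst)
    if t1 == t2:
        return subst
    if isinstance(t1, str) and t1.startswith("?"):
        return {**subst, t1: t2}
    if isinstance(t2, str) and t2.startswith("?"):
        return {**subst, t2: t1}
    if isinstance(t1, tuple) and isinstance(t2, tuple):
        subst = unify(t1[1], t2[1], subst)   # argument types
        return unify(t1[2], t2[2], subst)    # result types
    raise TypeError(f"cannot unify {t1} with {t2}")

# Type *checking*: the annotation is fully explicit; we only confirm it.
subst = unify(("->", "Int", "Int"), ("->", "Int", "Int"), {})
print(subst)  # {} -- nothing left to solve

# Type *reconstruction*: the annotation contains a variable "?a";
# unification discovers that ?a must be Int.
subst = unify(("->", "?a", "?a"), ("->", "Int", "Int"), {})
print(resolve("?a", subst))  # Int
```

On this reading, pure reconstruction (Algorithm W) is the extreme case where *every* type is a variable to be solved for, and pure checking is the extreme case where none is; the chapter's system sits in between.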

Thanks.

APSP on GPGPU with paths reconstruction

There is a huge amount of work on efficiently solving all-pairs shortest paths (APSP) using GPGPU. But the main goal of these works is to compute the lengths of the shortest paths. Are there works which investigate (or propose) efficient ways to get not only the length but also the path itself (as a sequence of edges, for example) for real-world sparse graphs using GPGPU?
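For background, the standard trick that GPU formulations can adapt is to maintain a successor (next-hop) matrix alongside the distance matrix; paths are then read back hop by hop in O(path length), without storing them explicitly. A CPU sketch with a made-up graph:

```python
import math

# Floyd-Warshall with a successor matrix: nxt[i][j] is the first hop on
# a shortest i -> j path, updated in lockstep with the distance updates.
def floyd_warshall_with_paths(n, edges):
    dist = [[0.0 if i == j else math.inf for j in range(n)] for i in range(n)]
    nxt = [[j if i == j else None for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        dist[u][v] = w
        nxt[u][v] = v
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
                    nxt[i][j] = nxt[i][k]   # first hop now goes toward k
    return dist, nxt

def path(nxt, u, v):
    """Read a shortest u -> v path back from the successor matrix."""
    if nxt[u][v] is None:
        return []
    p = [u]
    while u != v:
        u = nxt[u][v]
        p.append(u)
    return p

edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 5.0), (2, 3, 1.0)]
dist, nxt = floyd_warshall_with_paths(4, edges)
print(dist[0][3], path(nxt, 0, 3))  # 3.0 [0, 1, 2, 3]
```

The successor matrix costs only O(n²) extra memory and its update is the same data-parallel relaxation pattern that GPU APSP kernels already perform, so it should map onto them with little extra work; whether published GPGPU papers evaluate this for sparse real-world graphs is exactly the question.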

Efficient Algorithms for Destroyed Document Reconstruction

I am not certain this is the proper site for this question; however, I am mainly looking for resources on this topic (perhaps code). I was watching TV, and one of the characters had a lawyer who destroyed his documents using a paper shredder. A lab tech said that the shredder was special.

I am not familiar with this area of computer science/mathematics, but I am looking for information on efficient algorithms to reconstruct destroyed documents. I imagine I could fairly easily come up with a naive brute-force approach: going through all the pieces and looking for matching edges. But this doesn’t sound feasible, as the number of combinations will explode.

Note: By destroyed documents I mean taking a printed document, shredding it into small pieces, and reassembling it by determining which pieces fit together.
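One way the combinatorial explosion is typically tamed: instead of trying all orderings, score how well each piece's edge matches each other piece's edge, then search (or even just greedily chain) over that pairwise cost matrix. A sketch for the simplest case, vertical strips, on a synthetic "page" (all data made up; real systems need far more robust edge features):

```python
import numpy as np

# Synthetic smooth "document": pixel intensity varies gradually across the
# page, so adjacent strips have similar touching edges, as real ink does.
rng = np.random.default_rng(0)
page = np.tile(np.linspace(0.0, 1.0, 20), (16, 1)) + 0.005 * rng.random((16, 20))
strips = [page[:, 4 * i:4 * (i + 1)] for i in range(5)]
order = [3, 0, 4, 1, 2]                 # the "shredding": a fixed shuffle
shuffled = [strips[i] for i in order]

def edge_cost(a, b):
    """Dissimilarity between strip a's right edge and strip b's left edge."""
    return float(np.sum((a[:, -1] - b[:, 0]) ** 2))

n = len(shuffled)
best_next = [min((j for j in range(n) if j != i),
                 key=lambda j: edge_cost(shuffled[i], shuffled[j]))
             for i in range(n)]

# The leftmost strip is the one nobody picks as their best right neighbor.
start = next(i for i in range(n) if i not in best_next)
chain = [start]
for _ in range(n - 1):
    chain.append(best_next[chain[-1]])

print([order[k] for k in chain])  # [0, 1, 2, 3, 4]: original order recovered
```

Greedy chaining works here because the synthetic page is smooth; on real shreds the problem becomes an optimal-ordering search over the cost matrix (related to the traveling salesman problem), and cross-cut shredding with 2D pieces is harder still. The DARPA Shredder Challenge (2011) write-ups are a good entry point for practical approaches.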