3D scanning
3D scanning is the process of analyzing a real-world object or environment to collect three-dimensional data of its shape and possibly its appearance (e.g. colour). The collected data can then be used to construct digital 3D models.
A 3D scanner can be based on many different technologies, each with its own limitations, advantages and costs. Many limitations remain in the kinds of objects that can be digitised.
Collected 3D data is useful for a wide variety of applications. These devices are used extensively by the entertainment industry in the production of movies and video games, including virtual reality.
Functionality
The purpose of a 3D scanner is usually to create a 3D model of the subject, typically as a point cloud of geometric samples on its surface; these points can then be used to extrapolate (reconstruct) the shape of the subject.
3D scanners share several traits with cameras. Like most cameras, they have a cone-like field of view, and like cameras, they can only collect information about surfaces that are not obscured. While a camera collects colour information about surfaces within its field of view, a 3D scanner collects distance information about surfaces within its field of view. The "picture" produced by a 3D scanner describes the distance to a surface at each point in the picture. This allows the three dimensional position of each point in the picture to be identified.
In some situations, a single scan will not produce a complete model of the subject. Multiple scans from different directions are usually needed to obtain information about all sides of the subject. These scans have to be brought into a common reference system, a process usually called alignment or registration, and then merged to create a complete 3D model. This whole process, going from the single range map to the whole model, is usually known as the 3D scanning pipeline.[8][9][10][11][12]
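The registration step above amounts to finding the rigid transform that best maps one scan onto another. A minimal sketch in Python (not from the source; function names are illustrative) of the SVD-based alignment used inside ICP-style registration, assuming point correspondences are already known:

```python
import numpy as np

def align_scans(source, target):
    """Rigidly align `source` points to `target` points (known
    correspondences) with the SVD-based Kabsch method -- the core
    step repeated inside ICP-style registration."""
    src_c = source - source.mean(axis=0)
    tgt_c = target - target.mean(axis=0)
    H = src_c.T @ tgt_c                     # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = target.mean(axis=0) - R @ source.mean(axis=0)
    return R, t

# toy example: a scan rotated 30 degrees about z and shifted
rng = np.random.default_rng(0)
pts = rng.random((100, 3))
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
moved = pts @ R_true.T + np.array([0.5, -0.2, 1.0])
R, t = align_scans(pts, moved)
err = np.abs((pts @ R.T + t) - moved).max()  # residual after alignment
```

In practice correspondences are unknown, so ICP alternates nearest-neighbour matching with exactly this closed-form alignment until convergence.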
Technology
There are a variety of technologies for digitally acquiring the shape of a 3D object. The techniques work with most or all sensor types including optical, acoustic, laser scanning,[13] radar, thermal,[14] and seismic.[15][16] A well established classification[17] divides them into two types: contact and non-contact. Non-contact solutions can be further divided into two main categories, active and passive. There are a variety of technologies that fall under each of these categories.
Contact
Contact 3D scanners work by physically probing (touching) the part and recording the position of the sensor as the probe moves around the part.
There are two main types of contact 3D scanners:
- Coordinate measuring machines (CMMs), which traditionally have three perpendicular moving axes with a touch probe mounted on the Z axis. As the touch probe moves around the part, sensors on each axis record the position to generate XYZ coordinates. Modern CMMs are 5-axis systems, with the two extra axes provided by pivoting sensor heads. CMMs are the most accurate form of 3D measurement, achieving micron precision. The greatest advantage of a CMM after accuracy is that it can be run in autonomous (CNC) mode or as a manual probing system. The disadvantage of CMMs is their upfront cost and the technical knowledge required to operate them.
- Articulated arms, which generally have multiple segments with polar sensors on each joint. As with a CMM, as the articulated arm moves around the part the sensors record their positions, and the location of the end of the arm is calculated from the wrist rotation angle and hinge angle of each joint. While not usually as accurate as CMMs, articulated arms still achieve high accuracy and are cheaper and slightly easier to use. They do not usually have CNC options.
Both modern CMMs and Articulated Arms can also be fitted with non-contact laser scanners instead of touch probes.
Non-contact active
Active scanners emit some kind of radiation or light and detect its reflection, or radiation passing through the object, in order to probe an object or environment. Possible types of emissions used include light, ultrasound and X-rays.
Time-of-flight
The time-of-flight 3D laser scanner is an active scanner that uses laser light to probe the subject. At the heart of this type of scanner is a time-of-flight laser range finder, which finds the distance to a surface by timing the round-trip time of a pulse of light: the laser emits a pulse, the time until the reflection is seen by a detector is measured, and, since the speed of light is known, the round-trip time determines the distance.
The laser range finder only detects the distance of one point in its direction of view. Thus, the scanner scans its entire field of view one point at a time by changing the range finder's direction of view to scan different points. The view direction of the laser range finder can be changed either by rotating the range finder itself, or by using a system of rotating mirrors. The latter method is commonly used because mirrors are much lighter and can thus be rotated much faster and with greater accuracy. Typical time-of-flight 3D laser scanners can measure the distance of 10,000–100,000 points every second.
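The underlying distance computation is simple once the round-trip time is measured: distance is half the round-trip time multiplied by the speed of light. A small illustrative sketch (names are hypothetical, not from the source):

```python
# Time-of-flight ranging: a pulse travels to the surface and back,
# so distance = (speed of light * elapsed time) / 2.
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_seconds):
    """Distance to a surface from a measured round-trip pulse time."""
    return C * round_trip_seconds / 2.0

# a pulse returning after ~66.7 nanoseconds travelled to a surface
# roughly 10 metres away
d = tof_distance(66.7e-9)
```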
Time-of-flight devices are also available in a 2D configuration. This is referred to as a time-of-flight camera.[18]
Triangulation
Triangulation-based 3D laser scanners are also active scanners that use laser light to probe the environment. In contrast to a time-of-flight 3D laser scanner, a triangulation scanner shines a laser on the subject and uses a camera to locate the laser dot. Depending on how far away the laser strikes a surface, the laser dot appears at different places in the camera's field of view. This technique is called triangulation because the laser dot, the camera and the laser emitter form a triangle. The length of one side of the triangle, the distance between the camera and the laser emitter, is known. The angle of the laser emitter corner is also known. The angle of the camera corner can be determined by looking at the location of the laser dot in the camera's field of view. These three pieces of information fully determine the shape and size of the triangle and give the location of the laser dot corner of the triangle.[19] In most cases a laser stripe, instead of a single laser dot, is swept across the object to speed up the acquisition process. The use of triangulation to measure distances dates to antiquity.
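The triangle geometry described above can be worked through with the law of sines. An illustrative sketch (function name and angle conventions are assumptions, not from the source):

```python
import math

def laser_dot_position(baseline, laser_angle, camera_angle):
    """Locate the laser dot from the triangle formed by the laser
    emitter (at the origin), the camera (at `baseline` along x) and
    the dot. The angles are the interior triangle angles at the
    emitter and camera, in radians, measured from the baseline."""
    gamma = math.pi - laser_angle - camera_angle  # angle at the dot
    # law of sines: emitter-to-dot side is opposite the camera angle
    emitter_to_dot = baseline * math.sin(camera_angle) / math.sin(gamma)
    x = emitter_to_dot * math.cos(laser_angle)
    z = emitter_to_dot * math.sin(laser_angle)
    return x, z

# emitter and camera 0.1 m apart, both angled at 60 degrees:
# an isosceles triangle, so the dot sits centred at x = 0.05 m
x, z = laser_dot_position(0.1, math.radians(60), math.radians(60))
```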
Strengths and weaknesses
Time-of-flight range finders are capable of operating over long distances on the order of kilometres. These scanners are thus suitable for scanning large structures like buildings or geographic features. A disadvantage is that, due to the high speed of light, measuring the round-trip time is difficult and so the accuracy of the distance measurement is relatively low, on the order of millimetres.
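The timing difficulty can be made concrete: millimetre accuracy requires resolving round-trip times at the picosecond scale. A quick illustrative calculation (not from the source):

```python
C = 299_792_458.0  # speed of light in m/s

def timing_resolution_needed(distance_accuracy_m):
    """Round-trip timing resolution required for a given distance
    accuracy: the pulse covers the error distance twice, so
    t = 2 * d / c."""
    return 2.0 * distance_accuracy_m / C

# millimetre accuracy demands roughly 6.7 picosecond timing
t_mm = timing_resolution_needed(1e-3)
```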
Triangulation range finders, on the other hand, have a range usually limited to a few metres for reasonably sized devices, but their accuracy is relatively high, on the order of tens of micrometres.
Time-of-flight scanners' accuracy can be lost when the laser hits the edge of an object, because the information sent back to the scanner comes from two different locations for one laser pulse. The coordinate relative to the scanner's position for a point that has hit the edge of an object will be calculated based on an average and will therefore put the point in the wrong place. When using a high-resolution scan on an object, the chances of the beam hitting an edge are increased and the resulting data will show noise just behind the edges of the object. Scanners with a smaller beam width will help to solve this problem but will be limited by range, as the beam width increases over distance. Software can also help, by determining that the first object hit by the laser beam should cancel out the second.
At a rate of 10,000 sample points per second, low resolution scans can take less than a second, but high resolution scans, requiring millions of samples, can take minutes for some time-of-flight scanners. The problem this creates is distortion from motion. Since each point is sampled at a different time, any motion in the subject or the scanner will distort the collected data. Thus, it is usually necessary to mount both the subject and the scanner on stable platforms and minimise vibration. Using these scanners to scan objects in motion is very difficult.
Recently, there has been research on compensating for distortion from small amounts of vibration[20] and distortions due to motion and/or rotation.[21]
Short-range laser scanners cannot usually encompass a depth of field of more than 1 metre.[22] When scanning in one position for any length of time, slight movement can occur in the scanner position due to changes in temperature. If the scanner is set on a tripod and there is strong sunlight on one side of the scanner, that side of the tripod will expand and slowly distort the scan data from one side to the other. Some laser scanners have level compensators built in to counteract any movement of the scanner during the scan process.
Conoscopic holography
In a conoscopic system, a laser beam is projected onto the surface and the immediate reflection along the same ray path is passed through a conoscopic crystal and projected onto a CCD. The result is a diffraction pattern that can be frequency-analysed to determine the distance to the measured surface. The main advantage of conoscopic holography is that only a single ray path is needed for measuring, allowing, for instance, the depth of a finely drilled hole to be measured.
Hand-held laser scanners
Hand-held laser scanners create a 3D image through the triangulation mechanism described above: a laser dot or line is projected onto an object from a hand-held device and a sensor (typically a charge-coupled device or position-sensitive device) measures the distance to the surface.
Data is collected by a computer and recorded as data points within three-dimensional space; with processing, these can be converted into a triangulated mesh and then into a CAD model.
Structured light
Structured-light 3D scanners project a pattern of light on the subject and look at the deformation of the pattern on the subject. The pattern is projected onto the subject using either an LCD projector or other stable light source. A camera, offset slightly from the pattern projector, looks at the shape of the pattern and calculates the distance of every point in the field of view.
Structured-light scanning is still a very active area of research, with many papers published each year. Perfect maps have also proven useful as structured-light patterns that solve the correspondence problem and allow for error detection and error correction.[27]
The advantage of structured-light 3D scanners is speed and precision. Instead of scanning one point at a time, structured light scanners scan multiple points or the entire field of view at once. Scanning an entire field of view in a fraction of a second reduces or eliminates the problem of distortion from motion. Some existing systems are capable of scanning moving objects in real-time.
A real-time scanner using digital fringe projection and a phase-shifting technique (certain kinds of structured-light methods) was developed to capture, reconstruct, and render high-density details of dynamically deformable objects (such as facial expressions) at 40 frames per second.[28] Another scanner has since been developed that supports different patterns and achieves a frame rate of 120 frames per second for capture and data processing. It can also scan isolated surfaces, for example two moving hands.[29] By utilising the binary defocusing technique, speed breakthroughs have been made that could reach hundreds[30] to thousands of frames per second.[31]
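One common phase-shifting scheme is three-step fringe projection, in which three shifted sinusoidal patterns are projected and the per-pixel phase (which encodes depth) is recovered in closed form. A minimal sketch (an assumed variant; the pattern count and shift values differ between systems):

```python
import numpy as np

def wrapped_phase(i1, i2, i3):
    """Recover the wrapped fringe phase per pixel from three images
    taken with phase shifts of -120, 0 and +120 degrees (three-step
    phase-shifting structured light)."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# synthesise three fringe "images" for a known phase ramp
phi = np.linspace(-np.pi + 0.1, np.pi - 0.1, 50)
a, b = 0.5, 0.4                       # background and modulation
i1 = a + b * np.cos(phi - 2 * np.pi / 3)
i2 = a + b * np.cos(phi)
i3 = a + b * np.cos(phi + 2 * np.pi / 3)
recovered = wrapped_phase(i1, i2, i3)  # matches phi
```

The recovered phase is wrapped to (-pi, pi]; real systems follow this with phase unwrapping and a phase-to-depth calibration.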
Modulated light
Modulated light 3D scanners shine a continually changing light at the subject. Usually the light source simply cycles its amplitude in a sinusoidal pattern; a camera detects the reflected light, and the amount the pattern is shifted determines the distance the light travelled.
Volumetric techniques
Medical
Industrial
Although most common in medicine, industrial computed tomography and microtomography are also used in other fields for acquiring a digital representation of an object and its interior.
Non-contact passive
Passive 3D imaging solutions do not emit any kind of radiation themselves, but instead rely on detecting reflected ambient radiation. Most solutions of this type detect visible light because it is readily available ambient radiation; other types, such as infrared, can also be used. Passive methods can be very cheap, because in most cases they need no particular hardware beyond simple digital cameras.
- Stereoscopic systems usually employ two video cameras, slightly apart, looking at the same scene. By analysing the slight differences between the images seen by each camera, it is possible to determine the distance at each point in the images. This method is based on the same principles driving human stereoscopic vision.[32]
- Photometric systems usually use a single camera, but take multiple images under varying lighting conditions. These techniques attempt to invert the image-formation model in order to recover the surface orientation at each pixel.
- Silhouette techniques use outlines created from a sequence of photographs around a three-dimensional object against a well contrasted background. These silhouettes are extruded and intersected to form the visual hull approximation of the object. With these approaches some concavities of an object (like the interior of a bowl) cannot be detected.
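For rectified stereo cameras, the stereoscopic method above reduces to a simple disparity-to-depth relation: depth is inversely proportional to the pixel shift between the two views. An illustrative sketch (parameter names are assumptions, not from the source):

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth from a rectified stereo pair: the same scene point
    appears shifted between the two images (the disparity), and
    depth Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# 1000-pixel focal length, cameras 6 cm apart, 20-pixel disparity:
z = stereo_depth(1000.0, 0.06, 20.0)  # -> 3.0 metres
```

The same relation explains why accuracy falls off with distance: a fixed one-pixel matching error corresponds to a larger depth error as disparity shrinks.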
Photogrammetric non-contact passive methods
- Close-range photogrammetry typically uses a handheld camera to capture detailed images of nearby subjects such as building facades, vehicles, sculptures, rocks, and shoes.
- Camera Arrays can be used to generate 3D point clouds or meshes of live objects such as people or pets by synchronizing multiple cameras to photograph a subject from multiple perspectives at the same time for 3D object reconstruction.[35]
- Wide-angle photogrammetry can be used to capture the interior of buildings or enclosed spaces using a 360° camera.
- Aerial photogrammetry uses aerial images acquired by satellite, commercial aircraft or UAV drone to collect images of buildings, structures and terrain for 3D reconstruction into a point cloud or mesh.
Acquisition from acquired sensor data
Semi-automatic building extraction from lidar data and high-resolution images is also a possibility. Again, this approach allows modelling without physically moving towards the location or object.[36] From airborne lidar data, digital surface model (DSM) can be generated and then the objects higher than the ground are automatically detected from the DSM. Based on general knowledge about buildings, geometric characteristics such as size, height and shape information are then used to separate the buildings from other objects. The extracted building outlines are then simplified using an orthogonal algorithm to obtain better cartographic quality. Watershed analysis can be conducted to extract the ridgelines of building roofs. The ridgelines as well as slope information are used to classify the buildings per type. The buildings are then reconstructed using three parametric building models (flat, gabled, hipped).[37]
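The first step of the extraction pipeline described above, detecting objects higher than the ground from the DSM, can be sketched as follows (a toy illustration; the threshold value and array layout are assumptions):

```python
import numpy as np

def candidate_objects(dsm, dtm, min_height=2.5):
    """Subtract the ground model (DTM) from the surface model (DSM)
    and flag cells standing higher than `min_height` metres as
    candidate above-ground objects (buildings, trees, ...). The
    threshold is illustrative; later steps use size, height and
    shape knowledge to keep only buildings."""
    ndsm = dsm - dtm            # normalised DSM: height above ground
    return ndsm > min_height

# toy 1D "terrain": flat ground with one 10 m building
dtm = np.zeros(8)
dsm = np.array([0.0, 0.1, 10.0, 10.0, 10.0, 0.2, 0.0, 0.0])
mask = candidate_objects(dsm, dtm)  # True only over the building
```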
Acquisition from on-site sensors
Lidar and other terrestrial laser scanning technology[38] offers the fastest, automated way to collect height or distance information. Lidar or laser height measurement of buildings is becoming very promising.[39] Commercial applications of both airborne lidar and ground laser scanning technology have proven to be fast and accurate methods for building height extraction. The building extraction task is needed to determine building locations, ground elevation, orientations, building size, rooftop heights, etc. Most buildings are described in sufficient detail in terms of general polyhedra, i.e., their boundaries can be represented by a set of planar surfaces and straight lines. Further processing, such as expressing building footprints as polygons, is used for storing the data in GIS databases.
Using laser scans and images taken from ground level and a bird's-eye perspective, Fruh and Zakhor present an approach to automatically create textured 3D city models. This approach involves registering and merging detailed facade models with a complementary airborne model. The airborne modeling process generates a half-meter resolution model with a bird's-eye view of the entire area, containing terrain profile and building tops. The ground-based modeling process results in a detailed model of the building facades. Using the DSM obtained from airborne laser scans, they localize the acquisition vehicle and register the ground-based facades to the airborne model by means of Monte Carlo localization (MCL). Finally, the two models, with their different resolutions, are merged to obtain a 3D model.
Using an airborne laser altimeter, Haala, Brenner and Anders combined height data with the existing ground plans of buildings. The ground plans had already been acquired either in analog form from maps and plans or digitally in a 2D GIS. The project was done in order to enable automatic data capture by integrating these different types of information. Virtual-reality city models are then generated in the project by texture processing, e.g. by mapping terrestrial images. The project demonstrated the feasibility of rapid acquisition of 3D urban GIS. Ground plans proved to be another very important source of information for 3D building reconstruction. Compared to the results of automatic procedures, ground plans are more reliable since they contain aggregated information which has been made explicit by human interpretation. For this reason, ground plans can considerably reduce costs in a reconstruction project. An example of existing ground plan data usable in building reconstruction is the Digital Cadastral map, which provides information on the distribution of property, including the borders of all agricultural areas and the ground plans of existing buildings. Additional information, such as street names and the usage of buildings (e.g. garage, residential building, office block, industrial building, church), is provided in the form of text symbols. At the moment the Digital Cadastral map is built up as an area-covering database, mainly composed by digitising preexisting maps or plans.
Cost
- Terrestrial laser scan devices (pulse or phase devices) plus processing software generally start at a price of €150,000. Some less precise devices (such as the Trimble VX) cost around €75,000.
- Terrestrial lidar systems cost around €300,000.
- Systems using regular still cameras mounted on RC helicopters (Photogrammetry) are also possible, and cost around €25,000. Systems that use still cameras with balloons are even cheaper (around €2,500), but require additional manual processing. As the manual processing takes around one month of labor for every day of taking pictures, this is still an expensive solution in the long run.
- Obtaining satellite images is also an expensive endeavor. High-resolution stereo images (0.5 m resolution) cost around €11,000. Imaging satellites include QuickBird and Ikonos. High-resolution monoscopic images cost around €5,500. Somewhat lower resolution images (e.g. from the CORONA satellite, with a 2 m resolution) cost around €1,000 per pair of images. Note that Google Earth images are too low in resolution to make an accurate 3D model.[40]
Reconstruction
From point clouds
The point clouds produced by 3D scanners and 3D imaging can be used directly for measurement and visualisation in the architecture and construction world.
From models
Most applications, however, use instead polygonal 3D models, NURBS surface models, or editable feature-based CAD models (solid models):
- Surface models: the shape can be described by a patchwork of curved surfaces (e.g. NURBS), which are lighter and more editable than polygon meshes and are supported by packages such as Rhino 3D, Maya, T-Splines etc.
- Solid CAD models: From an engineering/manufacturing perspective, the ultimate representation of a digitised shape is the editable, parametric CAD model. In CAD, the sphere is described by parametric features which are easily edited by changing a value (e.g., centre point and radius).
These CAD models describe not simply the envelope or shape of the object; CAD models also embody the "design intent" (i.e., critical features and their relationship to other features). An example of design intent not evident in the shape alone might be a brake drum's lug bolts, which must be concentric with the hole in the centre of the drum. This knowledge would drive the sequence and method of creating the CAD model; a designer with an awareness of this relationship would not design the lug bolts referenced to the outside diameter, but instead, to the centre. A modeler creating a CAD model will want to include both shape and design intent in the complete CAD model.
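The sphere example above can be made concrete. A minimal sketch (not a real CAD kernel; names are illustrative) of how a parametric feature differs from a fixed mesh, in that editing one value regenerates the geometry:

```python
from dataclasses import dataclass
import math

@dataclass
class ParametricSphere:
    """Minimal stand-in for a parametric CAD feature: the shape is
    defined by editable values (centre point and radius), not by a
    fixed sampled mesh, so changing a parameter updates all derived
    geometry."""
    cx: float
    cy: float
    cz: float
    radius: float

    def surface_area(self):
        # derived geometry follows the current parameter values
        return 4.0 * math.pi * self.radius ** 2

sphere = ParametricSphere(0.0, 0.0, 0.0, radius=2.0)
sphere.radius = 3.0            # a single parametric edit
area = sphere.surface_area()   # geometry reflects the new value
```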
Vendors offer different approaches to getting to the parametric CAD model. Some export the NURBS surfaces and leave it to the CAD designer to complete the model in CAD; others use the scan data to create an editable, feature-based model that is imported into CAD with its full feature tree intact.
From a set of 2D slices
- Volume rendering: Different parts of an object usually have different threshold values or greyscale densities. From this, a 3-dimensional model can be constructed and displayed on screen. Multiple models can be constructed from various thresholds, allowing different colours to represent each component of the object. Volume rendering is usually only used for visualisation of the scanned object.
- Image segmentation: Where different structures have similar threshold/greyscale values, it can become impossible to separate them simply by adjusting volume rendering parameters. The solution is called segmentation, a manual or automatic procedure that can remove the unwanted structures from the image. Image segmentation software usually allows export of the segmented structures in CAD or STL format for further manipulation.
- Image-based meshing: When using 3D image data for computational analysis (e.g. CFD and FEA), simply segmenting the data and meshing from CAD can become time-consuming, and virtually intractable for the complex topologies typical of image data. The solution is called image-based meshing, an automated process of generating an accurate and realistic geometrical description of the scan data.
From laser scans
Laser scanning describes the general method to sample or scan a surface using laser technology. Several areas of application exist that mainly differ in the power of the lasers that are used, and in the results of the scanning process. Low laser power is used when the scanned surface doesn't have to be influenced, e.g. when it only has to be digitised. Confocal or 3D laser scanning are methods to get information about the scanned surface. Another low-power application uses structured light projection systems for solar cell flatness metrology,[41] enabling stress calculation for more than 2000 wafers per hour.[42]
The laser power used for laser scanning equipment in industrial applications is typically less than 1 W, usually on the order of 200 mW, though sometimes more.
From photographs
3D data acquisition and object reconstruction can be performed using stereo image pairs. Stereo photogrammetry, or photogrammetry based on a block of overlapped images, is the primary approach for 3D mapping and object reconstruction using 2D images. Close-range photogrammetry has also matured to the level where cameras or digital cameras can be used to capture close-up images of objects, e.g. buildings, and reconstruct them using the very same theory as aerial photogrammetry; software packages exist that can perform this reconstruction.
A semi-automatic method for acquiring 3D topologically structured data from 2D aerial stereo images has also been presented.
A method for semi-automatic building extraction together with a concept for storing building models alongside terrain and other topographic data in a topographical information system has been developed by Franz Rottensteiner. His approach was based on the integration of building parameter estimations into the photogrammetry process applying a hybrid modeling scheme. Buildings are decomposed into a set of simple primitives that are reconstructed individually and are then combined by Boolean operators. The internal data structure of both the primitives and the compound building models are based on the boundary representation methods.[52][53]
Multiple images are used in Zhang's[54] approach to surface reconstruction from multiple images. A central idea is to explore the integration of both 3D stereo data and 2D calibrated images. This approach is motivated by the fact that only robust and accurate feature points that survived the geometry scrutiny of multiple images are reconstructed in space. The density insufficiency and the inevitable holes in the stereo data should then be filled in by using information from multiple images. The idea is thus to first construct small surface patches from stereo points, then to progressively propagate only reliable patches in their neighborhood from images into the whole surface using a best-first strategy. The problem thus reduces to searching for an optimal local surface patch going through a given set of stereo points from images.
Multi-spectral images are also used for 3D building detection. The first and last pulse data and the normalized difference vegetation index are used in the process.[55]
New measurement techniques are also employed to obtain measurements of and between objects from single images by using the projection, or the shadow as well as their combination. This technology is gaining attention given its fast processing time, and far lower cost than stereo measurements.[citation needed]
Applications
Space experiments
3D scanning technology has been used to scan space rocks for the European Space Agency.[56][57]
Construction industry and civil engineering
- As-built drawings of bridges, industrial plants, and monuments
- Documentation of historical sites[60]
- Site modelling and layout
- Quality control
- Quantity surveys
- Payload monitoring[61]
- Freeway redesign
- Establishing a benchmark of pre-existing shape/state in order to detect structural changes resulting from exposure to extreme loadings such as earthquakes, vessel/truck impact or fire.
- Create GIS (geographic information system) maps[62] and geomatics.
- Subsurface laser scanning in mines and karst voids.[63]
- Forensic documentation[64]
Design process
- Increasing accuracy working with complex parts and shapes,
- Coordinating product design using parts from multiple sources,
- Updating old CAD scans with those from more current technology,
- Replacing missing or older parts,
- Creating cost savings by allowing as-built design services, for example in automotive manufacturing plants,
- "Bringing the plant to the engineers" with web shared scans, and
- Saving travel costs.
Entertainment
3D scanners are used by the entertainment industry to create digital 3D models for movies, video games and leisure purposes. In cases where a real-world equivalent of a model exists, it is much faster to scan the real-world object than to manually create a model using 3D modeling software. Frequently, artists sculpt physical models of what they want and scan them into digital form rather than directly creating digital models on a computer.

3D photography
3D scanners are evolving to use cameras to represent 3D objects in an accurate manner.[66] Since 2010, companies have emerged that create 3D portraits of people (3D figurines or 3D selfies).
An augmented reality menu for the Madrid restaurant chain 80 Degrees[67]
Law enforcement
3D laser scanning is used by law enforcement agencies around the world. 3D models are used for on-site documentation of:[68]
- Crime scenes
- Bullet trajectories
- Bloodstain pattern analysis
- Accident reconstruction
- Bombings
- Plane crashes, and more
Reverse engineering
Real estate
Land or buildings can be scanned into a 3D model, which allows buyers to tour and inspect the property remotely, from anywhere, without having to be present at the property.[69] There is already at least one company providing 3D-scanned virtual real estate tours.[70] A typical virtual tour would consist of a dollhouse view,[71] an inside view, as well as a floor plan.
Virtual/remote tourism
The environment at a place of interest can be captured and converted into a 3D model. This model can then be explored by the public, either through a VR interface or a traditional "2D" interface. This allows the user to explore locations which are inconvenient for travel.[72] A group of history students at Vancouver iTech Preparatory Middle School created a Virtual Museum by 3D Scanning more than 100 artifacts.[73]
Cultural heritage
There have been many research projects undertaken via the scanning of historical sites and artifacts both for documentation and analysis purposes.[74] The resulting models can be used for a variety of different analytical approaches.[75][76]
The combined use of 3D scanning and 3D printing technologies allows the replication of real objects without the use of traditional plaster-casting techniques, which in many cases can be too invasive for precious or delicate cultural heritage artifacts.
Creation of 3D models for museums and archaeological artifacts[78][79][80]
Michelangelo
In 1999, two different research groups started scanning Michelangelo's statues. Stanford University, with a group led by Marc Levoy,[81] used a custom laser triangulation scanner built by Cyberware to scan Michelangelo's statues in Florence, notably the David, the Prigioni and the four statues in the Medici Chapel. The scans produced a data point density of one sample per 0.25 mm, detailed enough to see Michelangelo's chisel marks. These detailed scans produced a large amount of data (up to 32 gigabytes), and processing the data from the scans took 5 months. In approximately the same period, a research group from IBM, led by H. Rushmeier and F. Bernardini, scanned the Pietà of Florence, acquiring both geometric and colour details. The digital model resulting from the Stanford scanning campaign was used extensively in the statue's subsequent restoration in 2004.[82]
Monticello
In 2002, David Luebke, et al. scanned Thomas Jefferson's Monticello.[83] A commercial time-of-flight laser scanner, the DeltaSphere 3000, was used. The scanner data was later combined with colour data from digital photographs to create the Virtual Monticello and the Jefferson's Cabinet exhibits in the New Orleans Museum of Art in 2003. The Virtual Monticello exhibit simulated a window looking into Jefferson's Library. It consisted of a rear-projection display on a wall and a pair of stereo glasses for the viewer. The glasses, combined with polarised projectors, provided a 3D effect. Position-tracking hardware on the glasses allowed the display to adapt as the viewer moved around, creating the illusion that the display was actually a hole in the wall looking into Jefferson's Library. The Jefferson's Cabinet exhibit was a barrier stereogram (essentially a non-active hologram that appears different from different angles) of Jefferson's Cabinet.
Cuneiform tablets
The first 3D models of cuneiform tablets were acquired in Germany in 2000.
Kasubi Tombs
A 2009 CyArk 3D scanning project at Uganda's historic Kasubi Tombs, a UNESCO World Heritage Site, produced detailed architectural models of the main building at the complex. After a 2010 fire destroyed much of the structure, reconstruction work has been able to draw on the dataset produced by the scan mission.
"Plastico di Roma antica"
In 2005, Gabriele Guidi, et al. scanned the "Plastico di Roma antica",[91] a model of Rome created in the last century. Neither the triangulation method nor the time-of-flight method satisfied the requirements of this project, because the item to be scanned was both large and contained small details. They found, though, that a modulated light scanner was able to provide both the ability to scan an object the size of the model and the accuracy that was needed. The modulated light scanner was supplemented by a triangulation scanner which was used to scan some parts of the model.
Other projects
The 3D Encounters Project at the Petrie Museum of Egyptian Archaeology aims to use 3D laser scanning to create a high-quality 3D image library of artefacts and enable digital travelling exhibitions of fragile Egyptian artefacts. English Heritage has investigated the use of 3D laser scanning for a wide range of applications to gain archaeological and condition data, and the National Conservation Centre in Liverpool has also produced 3D laser scans on commission, including portable object and in situ scans of archaeological sites.[92] The Smithsonian Institution has a project called Smithsonian X 3D, notable for the breadth of types of 3D objects it is attempting to scan. These include small objects such as insects and flowers, human-sized objects such as Amelia Earhart's flight suit, room-sized objects such as the gunboat Philadelphia, and historic sites such as Liang Bua in Indonesia. Also of note, the data from these scans is made available to the public for free and downloadable in several data formats.
Medical CAD/CAM
3D scanners are used to capture the 3D shape of a patient in orthotics and dentistry.
Many chairside dental CAD/CAM systems and dental laboratory CAD/CAM systems use 3D scanner technologies to capture the 3D surface of a dental preparation (either in vivo or in vitro), in order to produce a restoration digitally using CAD software and ultimately produce the final restoration using a CAM technology (such as a CNC milling machine or 3D printer). The chairside systems are designed to facilitate the 3D scanning of a preparation in vivo and produce the restoration (such as a crown, onlay, inlay or veneer).
3D scanning is also used in the creation of 3D models for anatomy and biology education[94][95] and of cadaver models for educational neurosurgical simulations.[96]
Quality assurance and industrial metrology
The digitisation of real-world objects is of vital importance in various application domains. It is especially applied in industrial quality assurance to measure geometric dimensional accuracy. Industrial processes such as assembly are complex, highly automated and typically based on CAD (computer-aided design) data. The problem is that the same degree of automation is also required for quality assurance. Assembling a modern car, for example, is a very complex task, since it consists of many parts that must fit together at the very end of the production line. The optimal performance of this process is guaranteed by quality assurance systems. In particular, the geometry of the metal parts must be checked to ensure that they have the correct dimensions, fit together and ultimately work reliably.
Within highly automated processes, the resulting geometric measures are transferred to the machines that manufacture the desired objects. Due to mechanical uncertainties and abrasion, the result may differ from its digital nominal. In order to automatically capture and evaluate these deviations, the manufactured part must be digitised as well. For this purpose, 3D scanners are applied to generate point samples from the object's surface, which are finally compared against the nominal data.[97]
The process of comparing 3D data against a CAD model is referred to as CAD-Compare, and can be a useful technique for applications such as determining wear patterns on moulds and tooling, determining the accuracy of the final build, analysing gap and flush, or analysing highly complex sculpted surfaces. At present, laser triangulation scanners, structured light and contact scanning are the predominant technologies employed for industrial purposes, with contact scanning remaining the slowest, but overall most accurate, option. Nevertheless, 3D scanning technology offers distinct advantages compared to traditional touch probe measurements. White-light or laser scanners accurately digitise objects all around, capturing fine details and freeform surfaces without reference points or spray. The entire surface is covered at high speed without the risk of damaging the part. Graphic comparison charts illustrate geometric deviations at the full-object level, providing deeper insights into potential causes.[98][99]
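The CAD-compare step described above can be sketched as a nearest-neighbour deviation check: for each scanned point, find the closest point on the nominal model and flag deviations that exceed a tolerance. The following is a minimal illustration in pure Python, using a sampled nominal point set as a stand-in for a full CAD surface; all data values and the tolerance are hypothetical.

```python
import math

def nearest_deviation(scan_points, nominal_points):
    """For each scanned point, return the distance to the closest nominal point."""
    deviations = []
    for p in scan_points:
        d = min(math.dist(p, n) for n in nominal_points)
        deviations.append(d)
    return deviations

# Hypothetical nominal data: points sampled on a flat reference surface (z = 0)
nominal = [(x * 0.5, y * 0.5, 0.0) for x in range(20) for y in range(20)]

# Hypothetical scanned samples with small manufacturing deviations in z
scan = [(1.0, 1.0, 0.02), (2.5, 3.0, -0.04), (4.0, 4.0, 0.11)]

tolerance = 0.1  # assumed tolerance, in the same units as the points
for p, dev in zip(scan, nearest_deviation(scan, nominal)):
    status = "OK" if dev <= tolerance else "OUT OF TOLERANCE"
    print(f"{p}: deviation {dev:.3f} -> {status}")
```

Production CAD-compare software measures point-to-surface rather than point-to-point distances and uses spatial indexing to handle millions of points, but the pass/fail logic against a nominal follows the same pattern.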
Object reconstruction
After the data has been collected, the acquired (and sometimes already processed) data from images or sensors needs to be reconstructed. This may be done in the same program, or in some cases the 3D data needs to be exported and imported into another program for further refinement and/or to add additional data, such as GPS location data. After reconstruction, the data might be implemented directly into a local (GIS) map[100][101] or a worldwide map such as Google Earth or Apple Maps.
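The export/import handoff mentioned above typically relies on a common interchange format. A minimal sketch of one such handoff, writing reconstructed points to an ASCII PLY file that many reconstruction and refinement tools can read; the file name and point values are hypothetical:

```python
def write_ascii_ply(path, points):
    """Write a list of (x, y, z) points as a minimal ASCII PLY point cloud."""
    header = [
        "ply",
        "format ascii 1.0",
        f"element vertex {len(points)}",
        "property float x",
        "property float y",
        "property float z",
        "end_header",
    ]
    with open(path, "w") as f:
        f.write("\n".join(header) + "\n")
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")

# Hypothetical reconstructed points from a scan
points = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.5), (0.0, 1.0, 0.25)]
write_ascii_ply("scan.ply", points)
```

Real scan exports usually also carry per-point colour or normal properties and often use the binary PLY variant for size, but the header-plus-vertex-list structure is the same.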
Software
Several software packages exist into which the acquired (and sometimes already processed) data from images or sensors can be imported. Notable software packages include:[102]
- Qlone
- 3DF Zephyr
- Canoma
- Leica Photogrammetry Suite
- MeshLab
- MountainsMap SEM (microscopy applications only)
- PhotoModeler
- SketchUp
- tomviz
See also
- 3D computer graphics software
- 3D printing
- 3D reconstruction
- 3D selfie
- Angle-sensitive pixel
- Depth map
- Digitization
- Epipolar geometry
- Full body scanner
- Image reconstruction
- Light-field camera
- Photogrammetry
- Range imaging
- Remote sensing
- Replicator
- Structured-light 3D scanner
- Thingiverse
References
- S2CID 3345516.
- S2CID 9881027.
- S2CID 8464855.
- ^ Scott, Clare (2018-04-19). "3D Scanning and 3D Printing Allow for Production of Lifelike Facial Prosthetics". 3DPrint.com.
- ^ O'Neal, Bridget (2015-02-19). "CyArk 500 Challenge Gains Momentum in Preserving Cultural Heritage with Artec 3D Scanning Technology". 3DPrint.com.
- S2CID 15779281.
- ^ "Matter and Form - 3D Scanning Hardware & Software". matterandform.net. Retrieved 2020-04-01.
- ^ OR3D. "What is 3D Scanning? - Scanning Basics and Devices". OR3D. Retrieved 2020-04-01.
- ^ "3D scanning technologies - what is 3D scanning and how does it work?". Aniwaa. Retrieved 2020-04-01.
- ^ "what is 3d scanning". laserdesign.com.
- CiteSeerX 10.1.1.472.8586.
- ISBN 978-1-901725-46-9. Retrieved 8 March 2024.
- ^ "Seismic 3D data acquisition". Archived from the original on 2016-03-03. Retrieved 2021-01-24.
- ^ "Optical and laser remote sensing". Archived from the original on 2009-09-03. Retrieved 2009-09-09.
- S2CID 442358.
- S2CID 2084943.
- S2CID 17914887.
- ISBN 0-7695-2223-8.
- S2CID 20531808.
- ^ "Understanding Technology: How Do 3D Scanners Work?". Virtual Technology. Archived from the original on 8 December 2020. Retrieved 8 November 2020.
- PMID 19724327.
- S2CID 3576337.
- S2CID 2921156.
- ^ Trost, D. (1999). U.S. Patent No. 5,957,915. Washington, DC: U.S. Patent and Trademark Office.
- PMID 20389536.
- PMID 20588818.
- PMID 21445150.
- ^ "Sussex Computer Vision: TEACH VISION5". Archived from the original on 2008-09-20.
- ^ "Geodetic Systems, Inc". www.geodetic.com. Retrieved 2020-03-22.
- ^ "What Camera Should You Use for Photogrammetry?". 80.lv. 2019-07-15. Retrieved 2020-03-22.
- ^ "3D Scanning and Design". Gentle Giant Studios. Archived from the original on 2020-03-22. Retrieved 2020-03-22.
- ^ Semi-Automatic building extraction from LIDAR Data and High-Resolution Image
- ^ Automated Building Extraction and Reconstruction from LIDAR Data (PDF) (Report). p. 11. Archived from the original (PDF) on 14 September 2020. Retrieved 9 September 2019.
- ^ "Terrestrial laser scanning". Archived from the original on 2009-05-11. Retrieved 2009-09-09.
- ^ Haala, Norbert; Brenner, Claus; Anders, Karl-Heinrich (1998). "3D Urban GIS from Laser Altimeter and 2D Map Data" (PDF). Institute for Photogrammetry (IFP).
- ^ Ghent University, Department of Geography
- ^ "Glossary of 3d technology terms". 23 April 2018.
- S2CID 121768537.
- ^ Vexcel FotoG
- ^ "3D data acquisition". Archived from the original on 2006-10-18. Retrieved 2009-09-09.
- ^ "Vexcel GeoSynth". Archived from the original on 2009-10-04. Retrieved 2009-10-31.
- ^ "Photosynth". Archived from the original on 2017-02-05. Retrieved 2021-01-24.
- ^ 3D data acquisition and object reconstruction using photos
- ^ 3D Object Reconstruction From Aerial Stereo Images (PDF) (Thesis). Archived from the original (PDF) on 2011-07-24. Retrieved 2009-09-09.
- ^ "Agisoft Metashape". www.agisoft.com. Retrieved 2017-03-13.
- ^ "RealityCapture". www.capturingreality.com/. Retrieved 2017-03-13.
- ^ "3D data acquisition and modeling in a Topographic Information System" (PDF). Archived from the original (PDF) on 2011-07-19. Retrieved 2009-09-09.
- ^ "Performance evaluation of a system for semi-automatic building extraction using adaptable primitives" (PDF). Archived from the original (PDF) on 2007-12-20. Retrieved 2009-09-09.
- ISBN 978-3-9500791-3-5.
- S2CID 206769306.
- ^ "Multi-spectral images for 3D building detection" (PDF). Archived from the original (PDF) on 2011-07-06. Retrieved 2009-09-09.
- ^ "Science of tele-robotic rock collection". European Space Agency. Retrieved 2020-01-03.
- ^ Scanning rocks, retrieved 2021-12-08
- ^ Landmark detection by a rotary laser scanner for autonomous robot navigation in sewer pipes Archived 2011-07-17 at the Wayback Machine, Matthias Dorn et al., Proceedings of the ICMIT 2003, the second International Conference on Mechatronics and Information Technology, pp. 600–604, Jecheon, Korea, Dec. 2003
- S2CID 8147627.
- ISBN 978-1-4666-2039-1.
- ^ Murphy, Liam. "Case Study: Old Mine Workings". Subsurface Laser Scanning Case Studies. Liam Murphy. Archived from the original on 2012-04-18. Retrieved 11 January 2012.
- ^ "Forensics & Public Safety". Archived from the original on 2013-05-22. Retrieved 2012-01-11.
- ^ "The Future of 3D Modeling". GarageFarm. 2017-05-28. Retrieved 2017-05-28.
- ^ Curless, B., & Seitz, S. (2000). 3D Photography. Course Notes for SIGGRAPH 2000.
- ^ "Códigos QR y realidad aumentada: la evolución de las cartas en los restaurantes". La Vanguardia (in Spanish). 2021-02-07. Retrieved 2021-11-23.
- ^ "Crime Scene Documentation".
- ^ "Matterport Surpasses 70 Million Global Visits and Celebrates Explosive Growth of 3D and Virtual Reality Spaces". Market Watch. Retrieved 19 December 2016.
- ^ "The VR Glossary". 29 August 2016. Retrieved 26 April 2017.
- ^ Gillespie, Katie (May 11, 2018). "Virtual reality translates into real history for iTech Prep students". The Columbian. Retrieved 2021-12-09.
- S2CID 16510261.
- S2CID 253353315.
- S2CID 225390638.
- S2CID 26690232.
- ProQuest 2585423206.
- ^ "Submit your artefact". www.imaginedmuseum.uk. Retrieved 2021-11-23.[permanent dead link]
- ^ "Scholarship in 3D: 3D scanning and printing at ASOR 2018". The Digital Orientalist. 2018-12-03. Retrieved 2021-11-23.
- ^ Marc Levoy; Kari Pulli; Brian Curless; Szymon Rusinkiewicz; David Koller; Lucas Pereira; Matt Ginzton; Sean Anderson; James Davis; Jeremy Ginsberg; Jonathan Shade; Duane Fulk (2000). "The Digital Michelangelo Project: 3D Scanning of Large Statues" (PDF). Proceedings of the 27th annual conference on Computer graphics and interactive techniques. pp. 131–144.
- ISBN 978-88-09-03325-2.
- ^ David Luebke; Christopher Lutz; Rui Wang; Cliff Woolley (2002). "Scanning Monticello".
- ^ "Tontafeln 3D, Hetitologie Portal, Mainz, Germany" (in German). Retrieved 2019-06-23.
- S2CID 676588.
- ISBN 978-3-905674-29-3.
- S2CID 211026941.
- ^ Scott Cedarleaf (2010). "Royal Kasubi Tombs Destroyed in Fire". CyArk Blog. Archived from the original on 2010-03-30. Retrieved 2010-04-22.
- ISBN 0-7695-2327-7.
- ^ "3D Body Scanner for Body Scanning in Medicine Field | Scantech". 2020-08-27. Retrieved 2023-11-15.
- S2CID 234497497.
- doi:10.15027/50609.
- PMID 34662905.
- ^ Christian Teutsch (2007). Model-based Analysis and Evaluation of Point Sets from Optical 3D Laser Scanners (PhD thesis).
- ^ "3D scanning technologies". Retrieved 2016-09-15.
- ^ Timeline of 3D Laser Scanners
- ^ "Implementing data to GIS map" (PDF). Archived from the original (PDF) on 2003-05-06. Retrieved 2009-09-09.
- ^ 3D data implementation to GIS maps
- ISBN 978-3-540-72134-5.