How do you unwrap a mummy? New technique doesn't need human hands
New technology virtually unwraps mummies.
Some fields of science encourage destruction — like smashing atoms together at high speeds or burrowing into the crust of the earth for geological samples. But other sciences, like archaeology, require a much lighter touch.
So light, in fact, that with algorithms and machine learning, scientists can undress a mummy without lifting a single piece of its wrapping. Yet even with advanced technology, the process can be time-consuming and computationally expensive.
Now, a team of scientists from France and Malta has designed a technique that approaches the accuracy of the best existing methods with a fraction of the computing power.
Johann Briffa is an associate professor of communications and computer engineering at the University of Malta and senior author of the new study, published Wednesday in PLOS ONE.
Briffa says their new technique could have applications far beyond archaeology, including paleontology, geology, and even medical imaging.
“In principle, our method can be used on any volumetric image,” Briffa tells Inverse.
What’s new — Long gone are the days when mummified remains were desecrated for the sake of science. Instead, for decades now archaeologists have been using x-ray scans — like a human CT scan — and algorithms to study these delicate specimens. The name of the game is “segmentation,” Briffa says.
“Segmentation is the process of labeling image pixels (or [3D] ‘voxels’), based on the required semantic context,” Briffa says. “In our case, we label the voxels to identify different materials that make up the specimen — such as bone, soft tissue, etc. It's an operation on the scanned image, so it is always done virtually.”
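As a concrete illustration, here is a minimal sketch in Python of what per-voxel labeling looks like on a toy volume. The intensity thresholds and label values here are invented for the example; the actual study uses trained models, not fixed cutoffs.

```python
import numpy as np

# Toy volumetric image: a 64x64x64 grid of scan intensities in [0, 1].
rng = np.random.default_rng(0)
volume = rng.random((64, 64, 64))

# Illustrative material labels (assumed, not from the study).
AIR, SOFT_TISSUE, BONE = 0, 1, 2

# Assign one label per voxel; these thresholds are arbitrary stand-ins
# for what a trained classifier would decide.
labels = np.full(volume.shape, AIR, dtype=np.uint8)
labels[volume > 0.4] = SOFT_TISSUE
labels[volume > 0.8] = BONE

print(labels.shape)  # (64, 64, 64): one material label per voxel
```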
Briffa explains that these voxels can then be individually selected — say, only the bone voxels — to virtually “unwrap” the specimen. A common technique for labeling them is deep learning, which uses a complex neural network, but Briffa says it has significant drawbacks.
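Continuing the toy example above, the virtual “unwrap” then amounts to keeping only the voxels that carry a chosen label and discarding the rest:

```python
# Zero out every voxel that is not labeled as bone.
bone_only = np.where(labels == BONE, volume, 0.0)

# Rendering bone_only in a 3D viewer would show the skeleton alone,
# with wrappings and soft tissue made transparent.
```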
“The method that we developed uses classical machine learning with 3D features, as opposed to existing deep-learning methods that operate on the volume slice by slice,” Briffa says. “This avoids the discontinuities that often occur in a slice-by-slice approach.”
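A rough sketch of the contrast Briffa describes, reusing the toy volume above: it assumes a random-forest classifier and simple hand-crafted 3D neighborhood features, so treat it as the general shape of the idea rather than the paper's actual feature set or model.

```python
from scipy.ndimage import uniform_filter
from sklearn.ensemble import RandomForestClassifier

def features_3d(vol):
    """Per-voxel features drawn from full 3D neighborhoods, so the
    labeling stays consistent across slices (no slice-by-slice seams)."""
    mean3 = uniform_filter(vol, size=3)  # local 3D mean
    mean7 = uniform_filter(vol, size=7)  # wider 3D context
    return np.stack([vol, mean3, mean7], axis=-1).reshape(-1, 3)

# Train on a small manually labeled subset of voxels, standing in for
# the manually segmented slices used in the study...
train_idx = np.random.default_rng(1).choice(volume.size, 5000, replace=False)
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(features_3d(volume)[train_idx], labels.ravel()[train_idx])

# ...then label every voxel in the volume in one pass.
predicted = clf.predict(features_3d(volume)).reshape(volume.shape)
```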
Why it matters — One of the advantages of the technique is its ability to scale, Briffa says. That makes it more accessible to other kinds of projects, and its lower computational complexity could also come with a lower price tag.
This combination means more people can perform this kind of non-invasive analysis, on subjects ranging from living humans to ancient remains.
What they did — In this work, Briffa and colleagues tested their new technique on four mummified animal specimens from the Grenoble Museum of Natural History in France, dated to the Ptolemaic and Roman periods (roughly the 3rd century BC to the 4th century AD). The specimens included:
- A mummified puppy
- Two mummified ibis birds
- A mummified raptor
The team created volumetric images of the specimens using a synchrotron — which emits high-powered x-rays — and passed them to the machine learning algorithm. With limited human interaction, the algorithm then identified the different materials within the images, distinguishing soft tissue from bone, for example.
Briffa and colleagues report that their technique segmented these voxels with 94 to 98 percent accuracy, compared with 97 to 99 percent for deep-learning methods.
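That accuracy figure is simply per-voxel agreement with the manually segmented ground truth, which is easy to compute. In the toy example above, the thresholded labels array stands in for the manual segmentation:

```python
# Fraction of voxels whose predicted label matches the manual one.
manual = labels  # stand-in for manually segmented ground truth
accuracy = (predicted == manual).mean()
print(f"Per-voxel accuracy: {accuracy:.1%}")
```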
What’s next — In addition to expanding the technique to other disciplines, Briffa says the team is interested in how their work can be applied to deep learning methods.
One next step, for example, is to develop a way to apply the same complexity-reduction techniques to deep learning models. That would make the approach more accessible and allow deep learning to be used in a “full 3D context.”
Abstract: Propagation Phase Contrast Synchrotron Microtomography (PPC-SRμCT) is the gold standard for non-invasive and non-destructive access to internal structures of archaeological remains. In this analysis, the virtual specimen needs to be segmented to separate different parts or materials, a process that normally requires considerable human effort. In the Automated SEgmentation of Microtomography Imaging (ASEMI) project, we developed a tool to automatically segment these volumetric images, using manually segmented samples to tune and train a machine learning model. For a set of four specimens of ancient Egyptian animal mummies we achieve an overall accuracy of 94–98% when compared with manually segmented slices, approaching the results of off-the-shelf commercial software using deep learning (97–99%) at much lower complexity. A qualitative analysis of the segmented output shows that our results are close in terms of usability to those from deep learning, justifying the use of these techniques.