This article, How 3D printing glitches inspired these Doctor Who effects, was recently brought to my attention. The article (and several others like it on the web) talks about how 3D printing “glitches” like my Zombie Army print fails inspired the BBC show’s villains, the “Boneless.” To my great surprise and delight, the example 3D print fail cited as influential on the Boneless was one of my earliest self portraits! As a Doctor Who fan going all the way back to middle school, it’s quite an honor to have been of service to the Doctor.
There are only a few weeks until my class TaDDDaa!, which covers a range of digital processes for glass casting at the Pilchuck Glass School in Stanwood, Washington, and there are still a couple of slots available! Session 2 runs May 30–June 17. The course will combine a digital track of 3D modeling, scanning, and 3D printing/CNC routing with a physical track of glass casting, using lost PLA kiln casting and hot casting into CNC-carved graphite molds.
Apply today and join me in the woods for the time of your life!
This article is an apology to Nora Al-Badri and Jan Nikolai Nelles. I hope my questioning of your methods doesn’t land you in hot water. Last week, my post There’s Something Fishy About the Other Nefertiti blew the lid off a debate that had been generating a lot of chatter in online 3D scanning, printing, and art circles. My post debunked the possibility that your art project The Other Nefertiti was scanned using an Xbox Kinect sensor. Immediately after my posting, the internet seemingly exploded with commentary about how the “Nefertitihack” had to be a hoax. At the end of the day, I don’t think it matters whether you scanned the artifact with a Kinect or procured a file from other sources, because what you have created is a wonderful dialogue about the provenance of antiquities, ownership, and copying, as well as a strong argument for openness and sharing of data. What’s great is that this priceless antiquity is now within reach of anyone who cares to experience it, and people are interacting with Nefertiti in unique and personal ways they never could before.
The repercussions of my post began with Hacker News picking it up, which produced a ton of traffic and some great comments. The Nefertiti Hack: Digital Repatriation or Theft? was the first press I’m aware of to seriously consider that things in the story didn’t add up. Hyperallergic’s Claire Voon then produced some of the most insightful commentary in Could the Nefertiti Scan Be a Hoax — and Does that Matter?, and it all culminated with the Times’ Charly Wilder’s follow-up, Nefertiti 3-D Scanning Project in Germany Raises Doubts. (She was aghast that none of the three experts she had spoken to for her original article had picked up on the possibility that the data was too good for the claimed capture method.)
So we may never know the true origin of the file. I tend to think it is somehow derived from an official museum scan, perhaps copied during the creation of the museum’s 3D printed edition. However, the real conclusion I’ve come to is that it doesn’t matter. What matters is something the internet has taught us all along: from Wikileaks to Nefertiti, information wants to be free. Already, a quick Google search will turn up a range of Nefertiti derivative works, from interactive art to Lego minifig and Pez heads, Voronoi (swiss cheese) versions, Nefertiti vases, and many more. Only time will tell what will be done with her data.
Some Nefertiti-inspired works that are already making the rounds. The best is yet to come.
Perhaps the best argument for museums being open with their data is the talk below, just posted by Cosmo Wenman. Cosmo is an artist who has been actively scanning antiquities from museum collections for several years now. He has scanned a number of famous objects and casts made from objects, including the Venus de Milo and the Winged Victory of Samothrace. The talk likens these 3D scans to the 19th-century tradition of making plaster casts from classical sculptures and makes a strong case by showing several examples of how people have used the data he shared. This position is shared by many museums, including the Media Lab of the Metropolitan Museum of Art here in NYC, and I sincerely hope the trend will continue. The promise of the internet is that all of man’s knowledge will be at our fingertips. The Neues Museum would be wise to take note.
The New York Times and countless other news outlets picked up on a story that’s been making the rounds of the 3D printing community the last few weeks: Swiping a Priceless Antiquity … With a Scanner and a 3-D Printer tells the story of two artists, Nora Al-Badri and Jan Nikolai Nelles, who apparently pulled off a massive digital art heist when they surreptitiously scanned Nefertiti’s bust in Berlin’s Neues Museum. It’s great art commentary on many levels: it speaks to issues around digital copies vs. originals and who is allowed to own those copies, and, going deeper, it questions the museum’s right to own a statue that rightfully should belong to Egypt. By releasing the files for The Other Nefertiti free online as a torrent, the artists have initiated a huge debate with many layers. The 3D printing community has its own debate about this work: the video the artists share, which shows them sneaking a Kinect scan from beneath Ms. Al-Badri’s scarf, raises a lot of questions as to whether the artists even scanned the artifact as they claim.
The video shows the two using an Xbox Kinect sensor to capture Nefertiti, and while I have no doubt the artists may have performed the Kinect stunt, there is simply no way the scan being distributed was made with a Kinect. Simply put, the distributed scan, made up of more than 2 million triangles, is far too detailed to have been captured with that hardware. Even if the device was “hacked,” as news reports state, this quality is not achievable with this method. Problems with the story include how the unit was powered (the Kinect requires a wall outlet), and even if they carried some kind of battery, Nefertiti sits under glass, which also causes errors with this method of scanning. Where was the connected laptop hidden? Then there’s the video itself, which shows Ms. Al-Badri repeatedly covering and uncovering the Kinect as she circles Nefertiti; a normal scan requires uninterrupted line of sight to the statue while the scan is happening. To get coverage of the top of the headdress, the scanner would have to be held high above the statue, not at waist level as the video indicates. Multiple scans might have been made from various vantage points and merged together later, but truthfully the scan is far too high fidelity for the way the Kinect works. The device sprays a grid of infrared light points over the subject, measures a distance at each point, and lets the computer software do the math to turn those distances into a 3D model. It’s a fast way to generate a 3D model, but with accuracy that is at best within a few millimeters.
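To make the resolution argument concrete, here is a minimal sketch of how a structured-light depth camera like the Kinect turns its per-pixel distance readings into a point cloud. The intrinsics below are rough ballpark values I’m assuming for illustration, not measured from any particular unit:

```python
# Sketch: back-projecting a Kinect-style depth image into a 3D point cloud.
# FX/FY/CX/CY are assumed, approximate pinhole intrinsics for illustration only.
import numpy as np

FX, FY = 594.2, 591.0   # focal lengths in pixels (assumed)
CX, CY = 339.5, 242.7   # principal point (assumed)

def depth_to_point_cloud(depth_mm: np.ndarray) -> np.ndarray:
    """Back-project an (H, W) depth image in millimeters to Nx3 points in meters."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm / 1000.0              # mm -> meters
    x = (u - CX) * z / FX              # pinhole back-projection
    y = (v - CY) * z / FY
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]          # drop pixels with no depth reading

# A single 640x480 frame yields at most ~307,000 points, and the depth values
# are quantized to whole millimeters -- nowhere near enough to account for a
# 2-million-triangle mesh with sub-millimeter surface detail.
frame = np.full((480, 640), 1500, dtype=np.float64)  # toy frame: wall 1.5 m away
cloud = depth_to_point_cloud(frame)
print(cloud.shape)  # (307200, 3)
```

The point is the ceiling this math implies: even merging many such frames, every input point carries millimeter-scale error, so fine surface detail simply isn’t in the data.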
So if it wasn’t made with a Kinect, how was this scan created, and why lie about it? When confronted on the 3D in Review Podcast about how the scan was made, Mr. Nelles was vague in his answers, claiming that he and Ms. Al-Badri knew nothing about the device and that some hacker types had set up the hardware for them. The “hackers” then took the data and created the model for them. When asked about the hackers’ technique, Mr. Nelles stated that they had left for New Zealand and could not be contacted.
One theory is that the scan was actually generated by photogrammetry, a technique that captures images of the sculpture from a variety of angles. The images are then fed into software such as Agisoft Photoscan, which analyzes them all for common points and generates a 3D model of the subject. Paul Docherty is a researcher who has used photogrammetry extensively to reconstruct historic artifacts and sites, including a model of Nefertiti’s bust built from imagery he gathered online. He catalogued the process in his article 3D Modelling the Bust of Queen Nefertiti, and also spoke about his efforts on the 3D in Review podcast. Mr. Docherty has since gone on to question the Nefertiti Hack scan in his article Nefertiti Hack – Questions regarding the 3D scan of the bust of Nefertiti, in which he agrees that there is no way this scan was captured with a Kinect. So it’s possible that the scan was made from a series of 45–120 high-res images covertly gathered with a cellphone, but if that’s how it was done, why show the Kinect in the video?
The last possibility, and the reigning theory, is that Ms. Al-Badri and Mr. Nelles’s elusive hacker partners are literally real hackers who stole a copy of a high-resolution scan from the museum’s servers. A high-resolution scan must exist, since a high-res 3D printed replica is already available for sale online. Museum officials have dismissed the Other Nefertiti model as “of minor quality,” but that’s not what we are seeing in this highly detailed scan. Perhaps the file was obtained from someone involved in printing the reproduction, or it is a scan made of the reproduction? Indeed, the common belief in online 3D printing community chatter is that the Kinect “story” is a fabrication to hide the fact that the model is actually stolen data from a commercial high-quality scan. If the artists were behind a server hack, the legal ramifications for them are much more serious than those for scanning the object, which has few, if any, legal precedents.
What do you think? Will we ever know? Nefertiti’s data liberation has certainly sparked some controversy and I look forward to seeing what the community creates from it, as well as what truths come out as the story unfolds!
I’m pleased to announce that I will be teaching a class at the Pilchuck Glass School in June of 2016. The class will cover a lot of the digital-to-physical techniques I have been working with over the last few years, particularly this past year during my fellowship at Wheaton Arts’ Creative Glass Center of America (stay tuned; I will be publishing my findings soon).
The intensive three-week class takes place during Pilchuck’s Session 2, which this year has a theme of Play, and we certainly will be playing with ways of bringing the 21st century to glassmaking’s 19th-century traditions.
May 31- June 17
3-D Modeling & Printing, Lost PLA, CNC, Kilncasting, Hot Casting
Digital sculpting and 3-D printing tools allow artists to visualize prototypes, manipulate scale, and replicate with precision. This course will introduce an assortment of tools including Zbrush (an organic sculpting software), 3-D printers, and 3-D scanning methods of photogrammetry and structured light. Students will learn to scan, manipulate, and print objects and ultimately kilncast and hot cast them in glass. This class is for glass artists who wish to explore digital fabrication and 3-D artists who wish to explore glass.
The class will cover:
- 3D Modeling with Zbrush
- 3D Scanning with structured light and photogrammetry
- 3D Printing
- Glass kiln casting with lost PLA
- CNC milling of graphite molds for hot glass casting
It’s going to be an awesome class, and I’m looking for two teaching assistants. Applications for TAs are due February 3. I would like to find one TA who is well versed in 3D modeling and 3D printing (it’s a plus if you own or have built a 3D printer), and another who is familiar with glass kiln casting. Please help spread the word!
Before the year ends, I wanted to write about some of the events I was able to be a part of this year with the Scan-A-Rama 3D Portrait Studio. I had a spectacular year of making 3D portraits for thousands of people at a slew of events, including talks as well as scanning events at the Innovation Loft, the Midwest RepRap Festival, Newark, New Jersey, Westport, Connecticut, the Bay Area and New York Maker Faires, Pepsico’s Executive World Summit, and Theatre Bizarre. I also spent the year on an artist fellowship at Wheaton Arts’ Creative Glass Center of America, where I innovated and explored methods of glass casting from digitally designed objects. Stay tuned for more on this; I will be posting the findings from the fellowship in January (I am just wrapping up there now). Finally, I went out to Seattle and set up a bot lab at the Pilchuck Glass School, where I will be teaching a class, TaDDDaa!!, in June of 2016. The class will cover techniques of glass casting from a digital workflow that includes 3D sculpting, scanning, printing, and CNC carving. If this list isn’t impressive enough for you, I also had two amazing corporate gigs in November that I’ve been meaning to write up: the first at Google for their internal user experience conference, Google UXU, and the second for Pfizer at their Excelerate innovation conference!
Google’s UXU is an internal conference where the company’s entire UX ladder meets up to compare notes on best practices, hone skills, and get inspired to create some of the world’s best digital products. I was honored to be invited out to the Googleplex to deliver the conference’s opening keynote address, expanded from my Bay Area Maker Faire talk about the history of technology as entertainment. It included my work on recreating Luna Park with 3D printing, and it ended with some conjectures about 3D printing and what will happen when hot rod culture gets hold of self-driving cars. Was the talk well received? Here’s what one attendee said:
November was busy! I had barely returned from the Googleplex when Pfizer’s Excelerate conference began and I spent two days making 3D portraits of some of the movers and shakers of one of the world’s largest pharmaceutical companies. I even made a portrait of the CEO, Ian Read!
2015 was an amazing year for me. I can’t say I got rich, but for my first year out on my own, doing what I love without working for someone else, I think I did pretty well for myself. Just writing this up now, I realize that Fredini Enterprises has done some great work… and there’s a lot more where that came from! I have a lot of exciting things in the pipeline for 2016 – new products, artwork, inventions, and more, so stay tuned.
Happy New Year Everyone!
I was recently at the Brooklyn Museum for the opening of the exhibit Coney Island: Visions of an American Dreamland, 1861–2008. It’s a great exhibit, and I encourage everyone to go check it out! While there, I took the time to photograph several objects in order to generate 3D models of a few Coney Island artifacts, as well as some beautiful architectural details.
This process of photogrammetry, or “physical photography” as I have come to call it, involves photographing an object many times from all angles, taking care to ensure that each image is in full focus. Once the object is photographed, software analyzes the images to find the same points across multiple shots and works out where in space each camera was. From there, a point cloud and a 3D mesh can be generated. It’s a laborious process, but it’s a very accurate way of generating 3D models of still objects like sculpture.
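Once the software knows where each camera was, the point cloud comes from triangulation: each matched feature is intersected back into 3D space. Here is a minimal sketch of that step with NumPy, using two toy cameras and one point of my own invention (real packages like Photoscan do this at scale with far more machinery):

```python
# Sketch: linear (DLT) triangulation of one matched feature seen by two
# cameras with known projection matrices. Toy numbers, for illustration only.
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Solve for the 3D point whose projections best match uv1 and uv2."""
    # Each image observation contributes two linear constraints on X.
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)        # null space of A is the solution
    X = vt[-1]
    return X[:3] / X[3]                # homogeneous -> Euclidean coordinates

# Two toy cameras: identity intrinsics; the second is shifted 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.2, 0.1, 4.0])                       # point 4 units away
uv1 = X_true[:2] / X_true[2]                             # seen by camera 1
uv2 = (X_true - np.array([1.0, 0.0, 0.0]))[:2] / X_true[2]  # seen by camera 2

print(triangulate(P1, P2, uv1, uv2))   # recovers ~[0.2, 0.1, 4.0]
```

Repeat this for tens of thousands of matched features and you have the dense point cloud that meshing software turns into a printable model.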
Here are the processed scans: