If you’ve ever tried to write something long – a thesis, a book, or a manifesto outlining your disappointment in the modern technocracy and your plan to foment violent revolution – you know that distractions can slow you down or even stop the creative process. That’s why the folks at Astrohaus created the Freewrite, a distraction-free typewriter, and it’s also why they are launching the Freewrite Traveler, a laptop-like word processor that’s designed for writing and nothing else. The product, which I saw last week, consists of a hearty, full-sized keyboard and an E Ink screen. There are multiple “documents” you can open and close, and the system autosaves and syncs to cloud services automatically. The Traveler costs $279 on Indiegogo and will have a retail price of $599. The goal of the Freewrite is to give you a place to write. You pull it out of your bag, open it, and start typing. That’s it. There are no tweets, Facebook sharing systems, or games. It lasts for four weeks on one charge – a bold claim but not an impossible one – and there are some improvements to the editing functions, including virtual arrow keys that let you move up and down in a document as you write. There are also hotkeys to bring up ancillary information like outlines, research, or notes. If the Traveler is anything like the original Freewrite, you can expect some truly rugged hardware. I tested an early model and the entire thing was built like a tank or, more correctly, like a Leica. Because it is aimed at the artistic wanderer, the entire thing weighs just two pounds and is strikingly compact. Is it for you? Well, if you liked the original or even missed the bandwagon when it first launched, you might really enjoy the Traveler. Because it is small and light, it could easily become a second writing device for your more creative work, one you pull out in times of pensive creativity.
It is not a true word processor replacement, however; it is a “first-thought-best-thought” kind of tool, letting you get words down without much fuss. I wouldn’t recommend it for research-intensive writing, but you could easily sketch out almost any kind of document on the Traveler and then edit it on a real laptop. There aren’t many physical tools to support distraction-free writing. Some folks, myself included, have used crazy old standalone word processors of the kind once issued to students, or simply set up laptops without a Wi-Fi connection. The Freewrite Traveler takes all of that to the next level by offering the simplest, clearest, and most distraction-free system available. Given that it’s 50% off right now on Indiegogo, it might be the right time to take the plunge.
Just when you thought you were safe from IoT on your keyboard, Das Keyboard has come out with the 5Q, a smart keyboard that can send you notifications and change colors based on the app you’re using. These kinds of keyboards aren’t particularly new – you can find gaming keyboards that light up all the colors of the rainbow. But the 5Q is almost completely programmable, and you can connect it to automation services like Zapier. This means you can do things like blink the spacebar red when someone passes your Nest camera or blink the Tab key white when the outdoor temperature falls below 40 degrees. You can also make a key blink when someone tweets, which could be helpful or frustrating. The $249 keyboard is delightfully rugged and the switches – made by Das Keyboard itself – are nicely clicky but not too loud. The keys have a bit of softness to them at the halfway point, so if you’re used to Cherry-style keyboards you might notice a difference here. That said, the keys are rated for 100 million actuations, far more than any competing switch. The RGB LEDs in each key, as you can see below, are very bright and visible, but when the key lights are all off the keyboard is completely unreadable. This, depending on your needs, is a feature or a bug. There is also a media control knob in the top right corner that brings up the Q app when pressed. The entire package is nicely designed, but the 5Q raises the question: do you really need a keyboard that can notify you when you get a new email? The Mac version of the software is also a bit buggy right now, but they are updating it constantly and I was able to install it and run it without issue. Weird things sometimes happen, however. For example, my Escape and F1 keys are currently blinking red and I don’t know how to turn them off. That said, Das Keyboard makes great keyboards.
They’re my absolute favorite in terms of form factor and key quality, and if you need a keyboard that can notify you when a cryptocurrency goes above a certain point or your Tesla stock is about to tank, look no further than the 5Q. It’s a keyboard for hackers by hackers and, as you can see below, the color transitions are truly mesmerizing. My keyboard glows — John Biggs (@johnbiggs)
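Since the 5Q is programmable, it is easy to imagine scripting it yourself. Below is a minimal Python sketch of firing a color “signal” at a single key through the Q desktop software’s local API. The port, endpoint path, zone names, and payload fields here are my assumptions for illustration only, not documented values; check the actual Q API before using anything like this.

```python
import json
import urllib.request

# Hypothetical local endpoint of the Q desktop app (port/path are guesses).
Q_URL = "http://localhost:27301/api/1.0/signals"

def make_signal(zone_id: str, color: str, name: str) -> dict:
    """Build a signal payload: flash the given key zone in the given color."""
    return {"zoneId": zone_id, "color": color, "name": name, "effect": "BLINK"}

def send_signal(payload: dict) -> None:
    """POST the signal to the Q app (requires the app to be running)."""
    req = urllib.request.Request(
        Q_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

# e.g. blink the spacebar red when a Nest camera event arrives
# ("KEY_SPACEBAR" is a hypothetical zone name):
nest_alert = make_signal("KEY_SPACEBAR", "#FF0000", "Nest: motion detected")
```

In practice you would call `send_signal(nest_alert)` from whatever webhook your automation service (Zapier or similar) hits when the event fires.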
For hobbyist photographers like myself, Hasselblad has always been the untouchable luxury brand reserved for high-end professionals. To fill the gap between casual and professional photography, they released the X1D — a compact, mirrorless medium format camera. Last summer when Stefan Etienne reviewed the newly released camera, I asked to take a picture. After importing the raw file into Lightroom and flipping through a dozen presets, I joked that I would eat ramen packets for the next year so I could buy this camera. It was that impressive. XCD 3.5/30mm lens Last month Hasselblad sent us the XCD 4/21mm (their latest ultra wide-angle lens) for a two-week review, along with the X1D body and XCD 3,2/90mm portrait lens for comparison. I wanted to see what I could do with the kit and had planned the following: swipe right on everyone with an unflattering Tinder profile picture and offer to retake it for them, and travel somewhere with spectacular landscapes. My schedule didn’t offer much time for either, so a weekend trip to the cabin would have to suffice. [gallery type="slideshow" link="none" columns="1" size="full" ids="1722181,1722182,1722183,1722184,1722185,1722186,1722187,1722188,1722201"] As an everyday camera The weekend upstate was rather quiet and uneventful, but it turned out to be the perfect setting to test the kit, because the X1D is slow AF. It takes approximately 8 seconds to turn on, with an additional 2-3 seconds of processing time after each shutter click — top that off with slow autofocus, a slow shutter release and short battery life (I went through a battery within a day, approximately 90 shots fired). Rather than reiterating Stefan’s review, I would recommend reading it for full specifications. Coming from a Canon 5D Mark IV, I’m used to immediacy and a decent hit rate. The first day with the Hasselblad was filled with constant frustration over missed moments and missed opportunities.
It felt impractical as an everyday camera until I shifted toward a more deliberate approach — reverting back to high school SLR days, when a roll of film held a limited 24 exposures. When I took pause, I began to appreciate the camera’s details: a quiet shutter, a compact but sturdy body and an intuitive interface, including a touchscreen LCD display/viewfinder. [gallery type="slideshow" link="none" columns="1" size="full" ids="1722796,1722784,1722775"] Nothing looks or feels cheap about the Swedish-designed, aluminum construction of both the body and lenses. It’s heavy for a mirrorless camera, but it feels damn good to hold. XCD 4/21mm lens [gallery type="slideshow" link="none" columns="1" size="full" ids="1722190,1722191,1722489,1722490"] Dramatic landscapes and cityscapes without an overly exaggerated perspective — this is where the XCD 4/21mm outperforms other super wide-angle lenses. With a 105° angle of view and a 17mm field-of-view equivalent on a full-frame DSLR, I was expecting a lot more distortion and vignetting, but the image automatically corrected itself and flattened out when imported into Lightroom. The latest deployment of Creative Cloud has the Hasselblad camera and lens profiles integrated into Lightroom, so there’s no need for downloading and importing profiles. Oily NYC real estate brokers should really consider using this lens to shoot their dinky 250 sq. ft. studio apartments: it makes them feel grand without looking comically fish-eyed. XCD 3,2/90mm lens The gallery below was shot using only the mirror’s vanity lights as practicals. It was also shot underexposed to see how much detail I could pull in post. (The images have been compressed, so you don’t have to wait for each 110MB file to load.) [gallery type="slideshow" link="none" columns="1" size="full" ids="1722193,1722194,1722195,1722196"] I’d like to think that if I had time and was feeling philanthropic, I could fix a lot of love lives on Tinder with this lens.
Where it shines Normally, images posted in reviews are unedited, but I believe the true test of raw images lies in post-production. This is where putting up with the X1D’s slow processing time and quick battery drain pays off. With the camera’s giant 50 MP 44 x 33mm CMOS sensor, each raw file was approximately 110MB (compared to my Mark IV’s 20-30MB) — that’s a substantial amount of information packed into 8272 x 6200 pixels. While other camera manufacturers tend to favor certain colors and skin tones, Dan Wang, a Hasselblad rep, told me, “We believe in seeing a very natural or even palette with very little influence. We’re not here to gatekeep what color should be. We’re here to give you as much data as possible, providing as much raw detail, raw color information that allows you to interpret it to your extent.” As someone who enjoys countless hours tweaking colors, shifting pixels and making things pretty, I’m appreciative of this. It allows for less fixing, more creative freedom. Who is this camera for? My friend Peter, a fashion photographer (he’s done editorial features for Harper’s Bazaar, Cosmopolitan and the like), is the only person I know who shoots on Hasselblad, so it felt appropriate to ask his opinion. “It’s for pretentious rich assholes with money to burn,” he snarked. I disagree. The X1D is a solid step for Hasselblad to get off heavy-duty tripods and out of the studio. At this price point, though, one might expect the camera to do everything; instead it’s aimed at a narrow demographic: the photographer who is willing to trade speed for quality and compactness. With smartphone companies like Apple and Samsung stepping up their camera game over the past few years, the photography world feels inundated with inconsequential, throw-away images (self-indulgent selfies, “look what I had for lunch,” OOTD…). My two weeks with the Hasselblad were a kind reminder of photography as a methodical art form, rather than a spray-and-pray hobby.
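Those file sizes are easy to sanity-check: a raw file stores roughly one sample per photosite, and assuming 16-bit samples (my assumption; the exact bit depth and compression vary), the quoted dimensions land close to the 110MB figure once metadata and overhead are added.

```python
# Back-of-envelope check on the raw file sizes quoted above:
# one 16-bit sample per photosite on a ~50 MP Bayer sensor.
width, height = 8272, 6200
bits_per_sample = 16  # assumption: uncompressed 16-bit samples

pixels = width * height                      # 51,286,400 photosites
raw_mb = pixels * bits_per_sample / 8 / 1e6  # ≈ 102.6 MB

print(f"{pixels / 1e6:.1f} MP -> ~{raw_mb:.0f} MB per uncompressed frame")
```

That lands within 10% of the ~110MB files observed, with the remainder plausibly headers, previews, and metadata.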
The reviewed kit runs $15,940 before tax:
X1D-50c body: $8,995.00 (currently on sale at B&H for $6,495.00)
XCD 4/21mm lens: $3,750.00
XCD 3,2/90mm lens: $3,195.00
Fear not, citizens — the law enforcement apparatus of California has apprehended or is hot on the trail of more than a dozen hardened criminals who boldly stole from the state’s favorite local business: Apple. Their unconscionable larceny amounted to more than a million dollars’ worth of devices stolen from Apple Stores — the equivalent of hundreds of iPhones. The alleged thieves would wear hoodies into Apple Stores — already suspicious, I know — and there they would snatch products on display and hide them in the ample pockets of those garments. Truly cunning. These crimes took place in 19 different counties in California, the police forces of which all collaborated to bring the perpetrators to justice, though the San Luis Obispo and Oakland departments led the charge. So far seven of the thieves have been arrested, and nine more have warrants out. In a press release, California Attorney General Xavier Becerra warned of the dangers of the criminal element: Organized retail thefts cost California business owners millions and expose them to copycat criminals. Ultimately, consumers pay the cost of this merchandise hijacking. We will continue our work with local law enforcement authorities to extinguish this mob mentality and prosecute these criminals to hold them accountable. You hear that, would-be copycats? You hear that, assembling mob? Xavier’s gonna give it to you… if you don’t fly straight and stop trying to stick ordinary consumers with the costs of your crimes. Not to mention California businesses. With Apple paying that $15 billion in back taxes, it doesn’t have a lot of cash to spare for these shenanigans. Well, I suppose it’s doing all right. I’ve asked Apple for comment on this case and whether they participated or cooperated in it. Perhaps Face ID helped.
Archaeology may not be the most likely place to find the latest in technology — AI and robots are of dubious utility in the painstaking fieldwork involved — but lidar has proven transformative. The latest accomplishment using the laser-based imaging technique maps thousands of square kilometers of an ancient Mayan city once millions strong, but the researchers make it clear that there’s no technological substitute for experience and a good eye. The Pacunam Lidar Initiative began two years ago, bringing together a group of scholars and local authorities to undertake the largest yet survey of a protected and long-studied region in Guatemala. Some 2,144 square kilometers of the Maya Biosphere Reserve in Petén were scanned, inclusive of and around areas known to be settled, developed, or otherwise of importance. Preliminary imagery and data illustrating the success of the project were announced earlier this year, but the researchers have now performed their actual analyses on the data, and the resulting paper summarizing their wide-ranging results has been published. The areas covered by the initiative, as you can see, spread over perhaps a fifth of the country. “We’ve never been able to see an ancient landscape at this scale all at once. We’ve never had a dataset like this. But in February really we hadn’t done any analysis, really, in a quantitative sense,” co-author Francisco Estrada-Belli, of Tulane University, told me. He worked on the project with numerous others, including his colleagues Marcello Canuto and Stephen Houston. “Basically we announced we had found a huge urban sprawl, that we had found agricultural features on a grand scale. After another nine months of work we were able to quantify all that and to get some numerical confirmations for the impressions we’d gotten.” “It’s nice to be able to confirm all our claims,” he said.
“They may have seemed exaggerated to some.” The lidar data was collected not by self-driving cars, which seem to be the only vehicles bearing lidar we ever hear about, nor even by drones, but by traditional airplane. That may sound cumbersome, but the distances and landscapes involved permitted nothing else. “A drone would never have worked — it could never have covered that area,” Estrada-Belli explained. “In our case it was actually a twin-engine plane flown down from Texas.” The plane would make dozens of passes over a given area, a chosen “polygon” perhaps 30 kilometers long and 20 wide. Mounted underneath was “a Teledyne Optech Titan MultiWave multichannel, multi-spectral, narrow-pulse-width lidar system,” which pretty much says it all: this is a heavy-duty instrument, the size of a refrigerator. But you need that kind of system to pierce the canopy and image the underlying landscape. The many overlapping passes were then collated and calibrated into a single digital landscape of remarkable detail. “It identified features that I had walked over a hundred times!” he laughed. “Like a major causeway — I walked over it, but it was so subtle, and it was covered by huge vegetation, underbrush, trees, you know, jungle — I’m sure that in another 20 years I wouldn’t have noticed it.” But these structures don’t identify themselves. There’s no computer labeling system that looks at the 3D model and says, “this is a pyramid, this is a wall,” and so on. That’s a job that only archaeologists can do. “It actually begins with manipulating the surface data,” Estrada-Belli said. “We get these surface models of the natural landscape; each pixel in the image is basically the elevation. Then we do a series of filters to simulate light being projected on it from various angles to enhance the relief, and we combine these visualizations with transparencies and different ways of sharpening or enhancing them.
After all this process, basically looking at the computer screen for a long time, then we can start digitizing it.” “The first step is to visually identify features. Of course, pyramids are easy, but there are subtler features that, even once you identify them, it’s hard to figure out what they are.” The lidar imagery revealed, for example, lots of low linear features that could be man-made or natural. It’s not always easy to tell the difference, but context and existing scholarship fill in the gaps. “Then we proceeded to digitize all these features… there were 61,000 structures, and everything had to be done manually,” Estrada-Belli said — in case you were wondering why it took nine months. “There’s really no automation because the digitizing has to be done based on experience. We looked into AI, and we hope that maybe in the near future we’ll be able to apply that, but for now an experienced archaeologist’s eye can discern the features better than a computer.” You can see the density of the annotations on the maps. It should be noted that many of these features had by this point been verified by field expeditions. By consulting existing maps and getting ground truth in person, they had made sure that these weren’t phantom structures or wishful thinking. “We’re confident that they’re all there,” he told me. [gallery ids="1721959,1721960,1721957,1721961,1721958"] “Next is the quantitative step,” he continued. “You measure the lengths and the areas and you put it all together, and you start analyzing them like you’d analyze any other dataset: the structure density of some area, the size of urban sprawl or agricultural fields. Finally we even figured out a way to quantify the potential production of agriculture.” This is the point where the imagery starts to go from point cloud to academic study. After all, it’s well known that the Maya had a large city in this area; it’s been intensely studied for decades.
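The relief-enhancement step Estrada-Belli describes, simulating light projected from various angles, is classic hillshading. Here is a minimal numpy sketch over a toy elevation grid; all the values are illustrative, not taken from the project’s data.

```python
import numpy as np

def hillshade(elev, azimuth_deg=315.0, altitude_deg=45.0):
    """Shade an elevation grid as if lit from the given sun direction."""
    az = np.radians(azimuth_deg)
    alt = np.radians(altitude_deg)
    dy, dx = np.gradient(elev)                 # surface slope components
    slope = np.pi / 2 - np.arctan(np.hypot(dx, dy))
    aspect = np.arctan2(-dx, dy)
    shaded = (np.sin(alt) * np.sin(slope)
              + np.cos(alt) * np.cos(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0, 1)

# A subtle causeway: a low ridge on flat ground, invisible in plan view
# but picked out by the shading at its edges.
dem = np.zeros((64, 64))
dem[:, 30:33] += 0.5                           # half-meter-high ridge
relief = hillshade(dem)

# Averaging shades from several azimuths is one way to "combine these
# visualizations", as the researchers describe doing.
multi = np.mean([hillshade(dem, a) for a in (0, 90, 180, 270)], axis=0)
```

The key point matches the article: a feature far too subtle to notice underfoot produces a visible brightness change at its edges once the light direction is simulated.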
But the Pacunam (which stands for Patrimonio Cultural y Natural Maya) study was meant to advance beyond the traditional methods employed previously. “It’s a huge dataset. It’s a huge cross-section of the Maya lowlands,” Estrada-Belli said. “Big data is the buzzword now, right? You truly can see things that you would never see if you only looked at one site at a time. We could never have put together these grand patterns without lidar.” “For example, in my area, I was able to map 47 square kilometers over the course of 15 years,” he said, slightly wistfully. “And in two weeks the lidar produced 308 square kilometers, to a level of detail that I could never match.” As a result the paper includes all kinds of new theories and conclusions, from population and economy estimates, to cultural and engineering knowledge, to the timing and nature of conflicts with neighbors. The resulting report doesn’t just advance the knowledge of Mayan culture and technology, but the science of archaeology itself. It’s iterative, of course, like everything else — Estrada-Belli noted that they were inspired by work done by colleagues in Belize and Cambodia; their contribution, however, exemplifies new approaches to handling large areas and large datasets. The more experiments and fieldwork, the more established these methods will become, and the more widely they will be accepted and replicated. Already they have proven themselves invaluable, and this study is perhaps the best example of lidar’s potential in the field. “We simply would not have seen these massive fortifications. Even on the ground, many of their details remain unclear. Lidar makes most human-made features clear, coherent, understandable,” explained co-author Stephen Houston (of Brown University) in an email.
“AI and pattern recognition may help to refine the detection of features, and drones may, we hope, bring down the cost of this technology.” “These technologies are important not only for discovery, but also for conservation,” pointed out co-author Thomas Garrison in an email. “3D scanning of monuments and artifacts provides detailed records and also allows for the creation of replicas via 3D printing.” Lidar imagery can also show the extent of looting, he wrote, and help cultural authorities guard against it by being aware of relics and sites before the looters are. The researchers are already planning a second, even larger set of flyovers, founded on the success of the first experiment. Perhaps by the time the initial physical work is done the trendier tools of the last few years will make themselves applicable. “I doubt the airplanes are going to get less expensive but the instruments will be more powerful,” Estrada-Belli suggested. “The other line is the development of artificial intelligence that can speed up the project; at least it can rule out areas, so we don’t waste any time, and we can zero in on the areas with the greatest potential.” He’s also excited by the idea of putting the data online so citizen archaeologists can help pore over it. “Maybe they don’t have the same experience we do, but like artificial intelligence they can certainly generate a lot of good data in a short time,” he said. But as his colleagues point out, even years of work in this field are necessarily preliminary. “We have to emphasize: it’s a first step, leading to innumerable ideas to test. Dozens of doctoral dissertations,” wrote Houston. “Yet there must always be excavation to look under the surface and to extract clear dates from the ruins.” “Like many disciplines in the social sciences and humanities, archaeology is embracing digital technologies. Lidar is just one example,” wrote Garrison.
“At the same time, we need to be conscious of issues in digital archiving (particularly the problem of obsolete file formats) and be sure to use technology as a complement to, and not a replacement for, methods of documentation that have proven tried and true for over a century.” The researchers’ paper was published today in Science; you can learn about their conclusions (which are of more interest to the archaeologists and anthropologists among our readers) there, and follow other work being undertaken by the Fundación Pacunam.
If you’re familiar with 20th century Soviet camera clones, you’ll probably be familiar with Zenit. Created by Krasnogorsky Zavod, the Nikon/Leica clones were a fan favorite behind the Iron Curtain and, like Lomo, Zenit was a beloved brand that just doesn’t get its due. The firm stopped making cameras in 2005, but in its long history it defined Eastern European photography for decades and introduced the rifle-like Photo Sniper, a camera that looked like something out of a James Bond film. Now, thanks to a partnership with Leica, Zenit is back and is here to remind you that in Mother Russia, picture takes you. The camera is based on the Leica M Type 240 platform but has been modified to look and act like an old Zenit. It comes with a Zenitar 35mm f/1.0 lens that is completely Russian-made. You can use it for bokeh and soft-focus effects without digital processing. The Leica M platform offers a 24MP full-frame CMOS sensor, a 3-inch LCD screen, HD video recording, live view focusing, a 0.68x viewfinder, ISO 6400, and 3fps continuous shooting. It will be available this year in the US, Europe, and Russia. How much does the privilege of returning to the past cost? An estimated $5,900-$7,000, if previous incarnations of the Leica M are any indication. I have a few old film Zenits lying around the house, however. I wonder if I can stick some digital guts in one and create the ultimate Franken-Zenit?
While this video shows a tiny robot from the City University of Hong Kong doing what amounts to a mitzvah, we can all imagine a future in which this little fellow could stab you in the kishkes. This wild little robot uses electromagnetic force to swim or flop back and forth to pull itself forward through harsh environments. Researchers can control it remotely from outside the body. “Most animals have a leg-length to leg-gap ratio of 2:1 to 1:1. So we decided to create our robot using 1:1 proportion,” said Dr. Shen Yajing of CityU’s Department of Biomedical Engineering. The legs are 0.65 mm long and pointed, reducing friction. The robot is made of a silicone material called polydimethylsiloxane (PDMS) embedded with magnetic particles, which enables it to be remotely controlled by applying electromagnetic force. It can bend almost 90 degrees to climb over obstacles. The researchers have sent the little fellow through multiple rough environments, including this wet model of a stomach. It can also carry medicines and drop them off as needed. “The rugged surface and changing texture of different tissues inside the human body make transportation challenging. Our multi-legged robot shows an impressive performance in various terrains and hence opens wide applications for drug delivery inside the body,” said Professor Wang Zuankai. The team hopes to create a biodegradable robot in the future, which would allow the little fellow to climb down your esophagus and into your guts and then, when it has dropped its payload, dissolve into nothingness or come out your tuchus.
The new iPhones have excellent cameras, to be sure. But it’s always good to verify Apple’s claims with first-hand reports. We have our own reviews of the phones and their photography systems, but teardowns provide the invaluable service of letting you see the biggest changes with your own eyes — augmented, of course, by a high-powered microscope. We’ve already seen teardowns of the phone, but TechInsights takes a closer look at the device’s components — including the improved camera of the iPhone XS and XS Max. Although the optics of the new camera are, as far as we can tell, unchanged since the X, the sensor is a new one and is worth looking at closely. Microphotography of the sensor die shows that Apple’s claims are borne out and then some. The sensor size has increased from 32.8mm² to 40.6mm² — a huge difference despite the small units. Every tiny bit counts at this scale. (For comparison, the Galaxy S9’s sensor is 45mm², and the soon-to-be-replaced Pixel 2’s is 25mm².) The pixels themselves also, as advertised, grew from 1.22 microns (micrometers) across to 1.4 microns — which should help with image quality across the board. But there’s a subtler development that has continually but quietly changed ever since its introduction: the “focus pixels.” That’s Apple’s brand name for phase-detection autofocus (PDAF) points, found in plenty of other devices. The basic idea is that you mask off half a sub-pixel every once in a while (which I guess makes it a sub-sub-pixel), and by observing how light enters these half-covered detectors you can tell whether something is in focus or not. Of course, you need a bunch of them to sense the image patterns with high fidelity, but you have to strike a balance: losing half a pixel may not sound like much, but if you do it a million times, that’s half a megapixel effectively down the drain. Wondering why all the PDAF points are green? Many camera sensors use an “RGBG” sub-pixel pattern, meaning there are two green sub-pixels for each red and blue one — it’s complicated why.
But there are twice as many green sub-pixels, and therefore the green channel is more robust to losing a bit of information. Apple introduced PDAF in the iPhone 6, but as you can see in TechInsights’ great diagram, the points are pretty scarce. There’s one for maybe every 64 sub-pixels, and not only that, they’re all masked off in the same orientation: either the left or right half gone. The 6S and 7 Plus saw the number double to one PDAF point per 32 sub-pixels. With the X, the number improved to one per 20 — but there’s another addition: now the phase-detection masks are on the tops and bottoms of the sub-pixels as well. As you can imagine, doing phase detection in multiple directions is a more sophisticated proposal, but it could also significantly improve the accuracy of the process. Autofocus systems all have their weaknesses, and this may have addressed one that Apple regretted in earlier iterations. Which brings us to the XS (and Max, of course), in which the PDAF points are now one per 16 sub-pixels, the frequency of the vertical phase-detection points having increased so that they’re equal in number to the horizontal ones. Clearly the experiment paid off, and any consequent light loss has been mitigated or accounted for. I’m curious how the sub-pixel patterns of Samsung, Huawei and Google phones compare, and I’m looking into it. But I wanted to highlight this little evolution. It’s an interesting example of the kind of change that is hard to understand when explained in simple number form — we’ve doubled this, or there are a million more of that — but which makes sense when you see it in physical form.
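The half-pixel arithmetic above scales straightforwardly with PDAF density. A quick sketch, assuming a 12 MP sensor throughout for illustration (an assumption for simplicity; the actual sensors vary by generation):

```python
# Effective resolution sacrificed to PDAF: each focus point masks half of
# one sub-pixel, so the cost is 0.5 pixel per point.
SENSOR_SUBPIXELS = 12_000_000  # assumed 12 MP sensor

def pdaf_cost_mp(subpixels_per_point: int) -> float:
    """Megapixels effectively lost at a given PDAF point density."""
    points = SENSOR_SUBPIXELS / subpixels_per_point
    return points * 0.5 / 1e6

for gen, density in [("iPhone 6", 64), ("6S/7 Plus", 32),
                     ("X", 20), ("XS", 16)]:
    print(f"{gen:>10}: 1 per {density:>2} sub-pixels -> "
          f"~{pdaf_cost_mp(density):.3f} MP lost")
```

Even at the XS’s density the cost is under half a megapixel, which is why doubling the point count generation after generation has been a tolerable trade.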
A major camera trade show is underway in London, and the theme this year is “large.” Unusually for an industry that is trending toward the compact, the cameras on stage at this show sport big sensors, big lenses, and big price tags. But though they may not be for the average shooter, these cameras are impressive pieces of hardware that hint at things to come for the industry as a whole. The most exciting announcement is perhaps that from Panasonic, which surprised everyone with the S1 and S1R, a pair of not-quite-final full-frame cameras that aim to steal a bit of the thunder from Canon and Nikon’s entries into the mirrorless full-frame world. Panasonic’s cameras have generally had impressive video performance, and these are no exception. They’ll shoot 4K at 60 FPS, which in a compact body like that shown is going to be extremely valuable to videographers. Meanwhile the S1R, with 47 megapixels to the S1’s 24, will be optimized for stills. Both will have dual card slots (which Canon and Nikon declined to add to their newest gear), weather sealing, and in-body image stabilization. The timing and inclusion of so many desired features indicate either that Panasonic was clued in to what photographers wanted all along, or that it waited for the other guys to move and then promised the things its competitors wouldn’t or couldn’t. Whatever the case, the S1 and S1R are sure to make a splash, whatever their prices. Panasonic was also part of an announcement that may have larger long-term implications: a lens-mount collaboration with Leica and Sigma aimed at maximum flexibility for the emerging mirrorless full-frame and medium format market. L-mount lenses will work on any of the group’s devices (including the S1 and S1R) and should help promote usage across the board. Leica, for its part, announced the S3, a new version of its medium format S series that switches over to the L-mount system as well as bumping a few specs. No price yet, but if you have to ask, you probably can’t afford it.
Sigma had no camera to show, but announced it would be taking its Foveon sensor tech to full frame and that upcoming bodies would use the L mount as well. This Fuji looks small here, but it’s no lightweight; it’s only small in comparison to previous medium format cameras. Fujifilm made its own push on the medium format front with the new GFX 50R, which sticks a larger-than-full-frame (but smaller than “traditional” medium format) sensor inside an impressively small body. That’s not to say it’s insubstantial: Fuji’s cameras are generally quite hefty, and the 50R is no exception, but it’s much smaller and lighter than its predecessor and, surprisingly, costs $2,000 less at $4,499 for the body. The theme, as you can see, is big and expensive. But the subtext is that these cameras are not only capable of extraordinary imagery, they don’t have to be enormous to do it. This combination of versatility and portability is one of the strengths of the latest generation of cameras, and clearly Fuji, Panasonic and Leica are eager to show that it extends to the pro-level, multi-thousand-dollar bodies as well as the consumer and enthusiast lineups.
If you’ve been keeping up with watchmaker MB&F, you’ll be familiar with its Horological Machine series, watches that are similar in construction but differ wildly when it comes to design. This watch, the HM9, is called the Flow and hearkens back to roadsters, jets, and 1950s space ships. The watch, limited to a run of 33 pieces, shows the time on a small forward-facing face in one of the cones. The other two cones contain dual balance wheels. The balance wheel is what causes the watch to tick and controls the energy released by the main spring. Interestingly, MB&F added two to this watch in an effort to ensure accuracy. “The twin balance wheels of the HM9 engine feed two sets of chronometric data to a central differential for an averaged reading,” the company wrote. “The balances are individually impulsed and spatially separated to ensure that they beat at their own independent cadences of 2.5Hz (18,000bph) each. This is important to ensure a meaningful average, just as how a statistically robust mathematical average should be derived from discrete points of information.” There are two versions, called the Road and the Air, and they cost a mere $182,000 (tax not included). Considering nearly every piece of this watch is made by hand, from the case to the curved crystal to the intricate movement, you’re essentially paying a team of craftsmen a yearly wage just to build your watch. While it’s no Apple Watch, the MB&F HM9 is a unique and weird little timepiece. It’s obviously not for everyone, but with enough cash and a little luck you can join a fairly exclusive club of HM9 owners.
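MB&F’s claim that averaging two independent balance wheels gives a more robust rate is, at bottom, just statistics: independent errors partially cancel when averaged. The toy simulation below is my own illustration of that principle (not MB&F’s actual mechanism, and the jitter figure is invented), comparing the average timing error of a single noisy wheel against the averaged pair:

```python
import random

def averaged_rate_error(true_rate, jitter, n_samples, seed=0):
    """Simulate two independent balance wheels, each beating at true_rate
    with Gaussian per-sample error, and return the mean absolute error of
    one wheel alone versus the averaged pair."""
    rng = random.Random(seed)
    single_err = 0.0
    paired_err = 0.0
    for _ in range(n_samples):
        a = true_rate + rng.gauss(0, jitter)  # wheel 1 reading
        b = true_rate + rng.gauss(0, jitter)  # wheel 2 reading
        single_err += abs(a - true_rate)
        paired_err += abs((a + b) / 2 - true_rate)
    return single_err / n_samples, paired_err / n_samples

# 2.5 Hz nominal beat rate, as in the HM9; jitter value is made up.
single, paired = averaged_rate_error(2.5, 0.01, 10_000)
```

Because the two wheels’ errors are independent, the averaged reading drifts less than either wheel alone, which is exactly the “statistically robust average” MB&F describes.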
Whether you’re trying to figure out how many students are attending your lectures or how many evil aliens have taken your Space Force brethren hostage, Wi-Fi can now be used to count them all. The system, created by researchers at UC Santa Barbara, uses a single Wi-Fi router outside of the room to measure attenuation and signal drops. From the release: The transmitter sends a wireless signal whose received signal strength (RSSI) is measured by the receiver. Using only such received signal power measurements, the receiver estimates how many people are inside the room — an estimate that closely matches the actual number. It is noteworthy that the researchers do not do any prior measurements or calibration in the area of interest; their approach has only a very short calibration phase that need not be done in the same area. This means that you could simply walk up to a wall and press a button to count, with a high degree of accuracy, how many people are walking around on the other side. The system, which can count up to 20 people in its current form, uses a mathematical model to “see” people in the room based on signal strength and attenuation: bodies and objects absorb Wi-Fi signals as they move around, allowing the system to pick out discrete things in the space. It is built from off-the-shelf components, and the researchers tested it in multiple locations and found it accurate to within two people or fewer with only one Wi-Fi device nearby. Sadly it can’t yet map people’s positions in the room, a feature that could be even more helpful in the future.
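To see why received signal strength alone can carry a head count, consider a deliberately crude model: measure the drop from an empty-room baseline and attribute each fixed chunk of attenuation to one body. The UCSB system uses a far more sophisticated model (and needs almost no calibration), so treat this sketch, with invented numbers, as nothing more than the basic intuition:

```python
def estimate_occupancy(rssi_dbm, baseline_dbm, per_person_drop_db=1.5):
    """Crude occupancy estimate: attribute the total signal drop from an
    empty-room baseline to a fixed per-person attenuation.

    The 1.5 dB per-person figure is invented for illustration; a real
    system would model attenuation and multipath far more carefully."""
    drop = baseline_dbm - rssi_dbm
    return max(0, round(drop / per_person_drop_db))

# Empty room calibrated at -40 dBm; a -49 dBm reading implies ~6 people.
crowd = estimate_occupancy(-49, -40)
```

Even this cartoon shows the shape of the problem: the receiver never “sees” anyone, it only sees how much signal the bodies in the room soaked up.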
Dust off your old Bose 501 speakers. New devices are coming that will give traditional audio equipment a voice. Amazon recently announced a slew of new Echo devices, and among the lot are several diminutive add-ons. These models do not have a smart speaker built in, but rather turn other speakers into smart speakers. Sonos has a similar device. Called the Sonos Amp, it connects the Sonos service to audio receivers and can drive traditional speakers, and a new version coming out in 2019 adds Alexa and AirPlay 2. This movement back towards traditional speaker systems could be a boon for audio companies reeling from the explosion of smart speakers. Suddenly, consumers do not have to choose between the ease of use of an inexpensive smart speaker and the vastly superior audio quality of a pair of high-end speakers. Consumers can have voice services and listen to Cake, too. Echo devices are everywhere in my house. They’re in three bedrooms, my office, our living room, my workshop and outside on the deck. But besides the Tap in the workshop and the Echo in the kitchen, every Echo is connected to an amp and speakers. For instance, in my office, I have an Onkyo receiver and a standalone Onkyo amp that powers a pair of Definitive Technology bookshelf speakers. The bedrooms have various speakers connected to older A/V receivers. Outside there’s a pair of speakers powered by a cheap mini-amp. Each system sounds dramatically better than any smart speaker available. There’s a quiet comfort in building an audio system: to pick out each piece and connect everything; to solder banana clips to speaker wire and ensure the proper power is flowing to each speaker. Amazon and Google built two of the best interfaces for audio in Alexa and Google Assistant. But that could change in the future. In the end, Alexa and Google Assistant are just another component in an audio stack, and to some consumers, it makes sense to treat them like a turntable or equalizer — a part that can be swapped out in the future.
The world of consumer electronics survives because of the disposable nature of gadgets. There’s always something better coming soon. Cellphones last a couple of years and TVs last a few years longer. But bookshelf speakers purchased today will still sound great in 20 years. There’s a thriving secondary market for vintage audio equipment, and unlike old computer equipment, buyers want this gear to actually use it. If you see a pair of giant Bose speakers at a garage sale, buy them and use them. Look at the prices for used Bose 901 speakers: they’re the cost of three Apple HomePods. Look at ShopGoodwill.com — Goodwill’s fantastic auction site. It’s filled with vintage audio equipment, with some pieces going for multiple thousands of dollars. Last year’s smart speakers are on there, too, available for pennies on the dollar. For the most part, audio equipment will last generations. Speakers can blow and wear out. Amps can get hit by surges and components can randomly fail. It happens, but most of the time, speakers survive. This is where Amazon and Sonos come in. Besides selling standalone speakers, both companies have products available that add voice services to independent speaker systems. A person doesn’t have to ditch their Pioneer stack to gain access to Alexa. They just have to plug in a new component, and in the future, if something better is available, that component can be swapped out for something else. Amazon first introduced this ability in the little Echo Dot. The $50 speaker has a 3.5mm output that makes it easy to add to a speaker system. A $35 version is coming soon that lacks the speaker found in the Dot but keeps the 3.5mm output. It’s set to be the easiest and cheapest way to add voice services to speakers. Amazon and Sonos also have higher-end components nearing release. The Amazon Echo Link features digital and analog audio outputs that should result in improved audio. The Amazon Echo Amp adds an amplifier to power a set of passive speakers directly.
Sonos offers something similar in the upcoming Sonos Amp, with 125 watts per channel and HDMI to allow it to be connected to a TV. These add-on products give consumers dramatically more options than a handful of plastic smart speakers. There are several ways to take advantage of these components. The easiest is to look at powered speakers. These have built-in amplifiers and, unlike traditional speakers, plug into an outlet for power; look at models from Klipsch or Yamaha. Buyers just need to connect a few cables to get sound superior to most smart speakers. Another option is to piece together a component system. Pick any A/V receiver and add a couple of speakers and a subwoofer. This doesn’t have to be expensive. Small $30 amps like those from Lepy or Pyle can drive a set of speakers — that’s what I use to drive outdoor speakers. Or look at Onkyo or Denon A/V surround sound receivers, build a home theater system and throw in an Amazon Echo Link on top. As for speakers, Polk, Klipsch, Definitive Technology, KEF, B&W, and many more produce fantastic speakers that will still work years after Amazon stops making Echo devices. Best of all, both options are modular and allow owners to modify the system over time. Want to add a turntable? Just plug it in. That’s not possible with a Google Home. Technology doesn’t have to be disposable. These add-on products offer the same solution as Roku or Fire TV devices — just plug in this device to add new tricks to old gear. When it gets old, don’t throw out the TV (or in this case the speakers), just plug in the latest dongle. Sure, it’s easy to buy a Google Home Max, and the speaker sounds great, too. For some people, it’s the perfect way to get music in their living space. It’s never been easier to listen to music or NPR. There are a few great options for smart speakers. The $350 Apple HomePod sounds glorious, though Siri lacks a lot of the smarts of Alexa or Google Assistant.
I love the Echo Dot for its utility and price point, and in a small space it sounds okay. For my money, the best smart speaker is the Sonos One. It sounds great, is priced right, and Sonos has the best ecosystem available. I’m excited about Amazon’s Echo and Sub bundle. For $249, buyers get two Echos and the new Echo Sub. The software forces the two Echos to work in stereo while the new subwoofer supplements the low end. I haven’t heard the system yet, but I expect it to sound as good as the Google Home Max or Apple HomePod, and the separate-component operation should help the audio fill larger spaces. Sonos has similar systems available. The fantastic Sonos One speaker can be used as a standalone speaker, as part of a multiroom system, or as a surround speaker with other Sonos One speakers and the Sonos Beam audio bar. To me, Sonos is compelling because of its ecosystem and tendency toward a longer product refresh cycle. In the past, Sonos has been much slower to roll out new products, instead adding services to existing products. The company seems to respect the owners of its products rather than forcing them to buy new hardware to gain new abilities. In the end, though, smart speakers from Apple, Sonos, Google or Amazon will stop working. Eventually, the companies will stop supporting the services powering the speakers and owners will throw the speakers in the trash. It’s depressing in the same way Spotify is depressing. Your grandkids are not going to dig through your digital Spotify milk crate. When the service is gone, the playlists are gone. That’s the draw of component audio equipment. A turntable purchased in the ’70s could still work today. Speakers bought during the first dot-com boom will still pound when the cryptocurrency bubble pops. As for Amazon Alexa and Google Assistant, to me, it makes sense to treat them as just another component in a larger system and enjoy them while they last.
This wild, 3D-printed self-solving Rubik’s cube is amazing. To make it work, a Japanese inventor used servo motors and Arduino boards to actuate the cube as it solves itself. Sadly, there isn’t much of a build description available, but it looks to be very compact and surprisingly fast. There is a video in which you can watch the little cube scoot around a table as it solves itself in less than a minute. The creator has also built a cute system for controlling a human as they walk down the street, as well as another project that is equally inexplicable. If this Ru-bot is real and ready for prime time, it could make for an amazing Kickstarter.
It’s been 10 years since the wraps came off the G1, the first Android phone. Since that time the OS has grown from a buggy, nerdy iPhone alternative into arguably the most popular (or at least most populous) computing platform in the world. But it sure as heck didn’t get there without hitting a few bumps along the road. Join us for a brief retrospective on the last decade of Android devices: the good, the bad, and the Nexus Q. HTC G1 (2008) This is the one that started it all, and I have a soft spot in my heart for the old thing. Also known as the HTC Dream, the G1 was about as geeky as you can imagine. Its full keyboard, trackball, slightly janky slide-up screen (crooked even in official photos), and considerable girth marked it from the outset as a phone only a real geek could love. Compared to the iPhone, it was like a poorly dressed whale. But in time its half-baked software matured and its idiosyncrasies became apparent for the smart touches they were. To this day I occasionally long for a trackball or full keyboard, and while the G1 wasn’t pretty, it was tough as hell. Moto Droid (2009) Of course, most people didn’t give Android a second look until Moto came out with the Droid, a slicker, thinner device from the maker of the famed RAZR. In retrospect, the Droid wasn’t that much better or different than the G1, but it was thinner, had a better screen, and had the benefit of an enormous marketing push from Motorola and Verizon. (Disclosure: Verizon owns Oath, which owns TechCrunch, but this doesn’t affect our coverage in any way.) For many, the Droid and its immediate descendants were the first Android phones they had — something new and interesting that blew the likes of Palm out of the water, but also happened to be a lot cheaper than an iPhone. HTC/Google Nexus One (2010) This was the fruit of the continued collaboration between Google and HTC, and the first phone Google branded and sold itself.
The Nexus One was meant to be the slick, high-quality device that would finally compete toe-to-toe with the iPhone. It ditched the keyboard, got a trackball, and had a lovely smooth design. Unfortunately it ran into two problems. First, the Android ecosystem was beginning to get crowded. People had lots of choices and could pick up phones for cheap that would do the basics. Why lay the cash out for a fancy new one? And second, Apple would shortly release the iPhone 4, which — and I was an Android fanboy at the time — objectively blew the Nexus One and everything else out of the water. Apple had brought a gun to a knife fight. HTC Evo 4G (2010) Another HTC? Well, this was prime time for the company. It was taking risks no one else would, and the Evo 4G was no exception. It was, for the time, huge: the iPhone had a 3.5-inch screen, and most Android devices weren’t much bigger, if they weren’t smaller. The Evo somehow survived our criticism (our alarm now seems extremely quaint, given the size of the average phone today) and was a reasonably popular phone, but ultimately it is notable not for breaking sales records but for breaking the seal on the idea that a phone could be big and still make sense. (Honorable mention goes to the Droid X.) Samsung Galaxy S (2010) Samsung’s big debut made a hell of a splash, with custom versions of the phone appearing in the stores of practically every carrier, each with its own name and design: the AT&T Captivate, T-Mobile Vibrant, Verizon Fascinate, and Sprint Epic 4G. As if the Android lineup wasn’t confusing enough already at the time! Though the S was a solid phone, it wasn’t without its flaws, and the iPhone 4 made for very tough competition. But strong sales reinforced Samsung’s commitment to the platform, and the Galaxy series is still going strong today. Motorola Xoom (2011) This was an era in which Android devices were responding to Apple, and not vice versa as we find today.
So it’s no surprise that hot on the heels of the original iPad we found Google pushing a tablet-focused version of Android with its partner Motorola, which volunteered to be the guinea pig with its short-lived Xoom tablet. Although there are still Android tablets on sale today, the Xoom represented a dead end in development — an attempt to carve a piece out of a market Apple had essentially invented and soon dominated. Android tablets from Motorola, HTC, Samsung and others were rarely anything more than adequate. The episode illustrated the impossibility of “leading from behind” and prompted device makers to specialize rather than participate in a commodity hardware melee. Amazon Kindle Fire (2011) And who better to illustrate that than Amazon? Its contribution to the Android world was the Kindle Fire line, which differentiated itself from the rest by being extremely cheap and directly focused on consuming digital media. Just $200 at launch and far less later, the Fire devices catered to the regular Amazon customer whose kids were pestering them about getting a tablet on which to play Fruit Ninja or Angry Birds, but who didn’t want to shell out for an iPad. Turns out this was a wise strategy, and of course one Amazon was uniquely positioned to execute, with its huge presence in online retail and the ability to subsidize the price out of the reach of competition. Fire tablets were never particularly good, but they were good enough, and for the price you paid, that was kind of a miracle. Xperia Play (2011) Sony has always had a hard time with Android. Its Xperia line of phones was for years considered competent — I owned a few myself — and arguably industry-leading in the camera department. But no one bought them. And the one they bought the least of, at least proportional to the hype it got, has to be the Xperia Play. This thing was supposed to be a mobile gaming platform, and the idea of a slide-out gamepad is great — but the whole thing basically cratered.
What Sony had illustrated was that you couldn’t just piggyback on the popularity and diversity of Android and launch whatever the hell you wanted. Phones didn’t sell themselves, and although the idea of playing PlayStation games on your phone might have sounded cool to a few nerds, it was never going to be enough to make it a million-seller. And increasingly, that’s what phones needed to be. Samsung Galaxy Note (2012) As a sort of natural climax to the swelling phone trend, Samsung went all out with the first true “phablet,” and despite groans of protest the phone not only sold well but became a staple of the Galaxy series. In fact, it wouldn’t be long before Apple would follow suit and produce a Plus-sized phone of its own. The Note also represented a step towards using a phone for serious productivity, not just everyday smartphone stuff. It wasn’t entirely successful — Android just wasn’t ready to be highly productive — but in retrospect it was forward-thinking of Samsung to make a go at it and begin to establish productivity as a core competence of the Galaxy series. Google Nexus Q (2012) This abortive effort by Google to spread Android out into a platform was part of a number of ill-considered choices at the time. No one, apparently at Google or anywhere else in the world, knew what this thing was supposed to do. I still don’t. As we wrote at the time: Here’s the problem with the Nexus Q: it’s a stunningly beautiful piece of hardware that’s being let down by the software that’s supposed to control it. It was made, or rather nearly made, in the USA, though, so it had that going for it. HTC First — “The Facebook Phone” (2013) The First got dealt a bad hand. The phone itself was a lovely piece of hardware with an understated design and bold colors that stuck out. But its default launcher, the doomed Facebook Home, was hopelessly bad. How bad? Launched in April, it was being practically given away by May.
I remember visiting an AT&T store during that brief period, and even then the staff had been instructed in how to disable Facebook’s launcher and reveal the perfectly good phone beneath. The good news was that there were so few of these phones sold new that the entire stock started selling for peanuts on eBay and the like. I bought two and used them for my early experiments in ROMs. No regrets. HTC One/M8 (2014) This was the beginning of the end for HTC, but its last few years saw it update its design language to something that actually rivaled Apple. The One and M8 were good phones, though HTC oversold the “Ultrapixel” camera, which turned out to not be that good, let alone iPhone-beating. As Samsung increasingly dominated, Sony plugged away, and Chinese companies increasingly entered the fray, HTC was under assault, and even a solid phone series like the One couldn’t compete. 2014 was a transition period, with old manufacturers dying out and the dominant ones taking over, eventually leading to the market we have today. Google/LG Nexus 5X and Huawei 6P (2015) This was the line that brought Google into the hardware race in earnest. After the bungled Nexus Q launch, Google needed to come out swinging, and it did that by marrying its more pedestrian hardware with some software that truly zinged. Android 5 was a dream to use, Marshmallow had features that we loved, and the phones became objects that we adored. This was when Google took its phones to the next level and never looked back. Google Pixel (2016) If the Nexus was, in earnest, the starting gun for Google’s entry into the hardware race, the Pixel line could be its victory lap. It’s an honest-to-god competitor to the Apple phone. Gone are the days when Google was playing catch-up on features to Apple; instead, Google’s a contender in its own right. The phone’s camera is amazing. The software works relatively seamlessly (bring back guest mode!), and the phone’s size and power are everything anyone could ask for.
The sticker price, like that of Apple’s newest iPhones, is still a bit of a shock, but this phone is the teleological endpoint of the Android quest to rival its famous, fruitful contender. Let’s see what the next ten years bring.
With the new Apple Watch, Apple introduced a new, larger display. It now has rounded edges and thinner bezels. And the company took advantage of that display to introduce new fire, water, liquid metal and vapor faces. Apple didn’t use CGI to create those faces — it shot them in a studio. Many companies would have rendered those effects on a computer given the size of the display, but those are actual videos shot with a camera. A video of the actual process has been shared, and it’s insane: as you can see, Apple used a flamethrower against a transparent surface, exploded a balloon at the top of a basin of water, set off a color powder explosion in a cylinder and rotated a small puddle of metallic liquid. It says a lot about Apple’s design culture — they don’t take shortcuts and they have a lot of money. Here’s the introduction video for the new Apple Watch:
Autonomous vehicles and robots have to know how to get from A to B without hitting obstacles or pedestrians — but how can they do so politely and without disturbing nearby humans? That’s what Stanford’s Jackrabbot project aims to learn, and now a redesigned robot will be cruising campus learning the subtleties of humans negotiating one another’s personal space. “There are many behaviors that we humans subconsciously follow – when I’m walking through crowds, I maintain personal distance or, if I’m talking with you, someone wouldn’t go between us and interrupt,” said grad student Ashwini Pokle in a Stanford News release. “We’re working on these deep learning algorithms so that the robot can adapt these behaviors and be more polite to people.” Of course there are practical applications pertaining to last-mile problems and robotic delivery as well. What do you do if someone stops in front of you? What if there’s a group running up behind? Experience is the best teacher, as usual. The original Jackrabbot has been hard at work building a model of how humans (well, mostly undergrads) walk around safely, avoiding one another while taking efficient paths and signaling what they’re doing the whole time. But technology has advanced so quickly that the team decided a new robot was in order. The JackRabbot project team with JackRabbot 2 (from left to right): Patrick Goebel, Noriaki Hirose, Tin Tin Wisniewski, Amir Sadeghian, Alan Federman, Silvio Savarese, Roberto Martín-Martín, Pin Pin Tea-mangkornpan and Ashwini Pokle. The new robot has a vastly improved sensor suite compared to its predecessor: two Velodyne lidar units giving 360-degree coverage, plus a set of stereo cameras making up its neck that give it another depth-sensing 360-degree view. The cameras and sensors on its head can also be pointed wherever needed, of course, just like ours. All this imagery is collated by a pair of new GPUs in its base/body.
Amir Sadeghian, one of the researchers, said this makes Jackrabbot 2 “one of the most powerful robots of its size that has ever been built.” This will allow the robot to sense human motion with a much greater degree of precision than before, and also operate more safely. It will also give the researchers a chance to see how the movement models created by the previous robot integrate with this new imagery. The other major addition is a totally normal-looking arm that Jackrabbot 2 can use to gesture to others. After all, we do it, right? When it’s unclear who should enter a door first or what side of a path they should take, a wave of the hand is all it takes to clear things up. Usually. Hopefully this kinked little gripper accomplishes the same thing. Jackrabbot 2 can zoom around for several hours at a time, Sadeghian said. “At this stage of the project for safety we have a human with a safety switch accompanying the robot, but the robot is able to navigate in a fully autonomous way.” Having working knowledge of how people use the space around them and how to predict their movements will be useful to startups like Kiwi, Starship, and Marble. The first time a delivery robot smacks into someone’s legs is the last time they consider ordering something via one.
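Jackrabbot learns these norms from observation with deep models, but the underlying planning idea can be sketched by hand: score candidate paths by how much they intrude on people’s personal space, then prefer the cheapest one. The snippet below is a simplified illustration of that concept only (the radius and penalty are invented, and the real system does not use a hand-coded cost like this):

```python
import math

def social_cost(path, people, personal_radius=1.2):
    """Sum a penalty for every path point that intrudes on someone's
    personal space; a planner would prefer the candidate path that
    minimizes path length plus this social cost."""
    cost = 0.0
    for (px, py) in path:
        for (hx, hy) in people:
            d = math.hypot(px - hx, py - hy)
            if d < personal_radius:
                # Quadratic penalty: deeper intrusions cost much more.
                cost += (personal_radius - d) ** 2
    return cost

people = [(1.0, 1.0)]                            # one pedestrian
polite = [(0.0, 0.0), (0.0, 2.5), (2.5, 2.5)]    # detours around them
rude = [(0.0, 0.0), (1.0, 1.0), (2.5, 2.5)]      # walks straight through
```

Comparing the two candidates, the polite detour incurs no personal-space penalty while the direct path does, so a planner weighing both terms would take the longer way around — which is exactly the “polite” behavior the project is after.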
A security researcher has published details of a vulnerability in a popular cloud storage drive after the company failed to issue a security patch for over a year. The flaw, which researcher Remco Vermeulen found in Western Digital’s My Cloud devices, allows an attacker to bypass the admin password on the drive, gaining “complete control” over the user’s data. The exploit works because the drive’s web-based dashboard doesn’t properly check a user’s credentials before giving a possible attacker access to tools that should require higher levels of access. The bug was “easy” to exploit, Vermeulen told TechCrunch in an email, and it is remotely exploitable if a My Cloud device allows remote access over the internet. He posted his findings on Twitter. Details of the bug were also independently found by another security team, which released its own exploit code. Vermeulen reported the bug over a year ago, in April 2017, but said the company stopped responding. Normally, security researchers give 90 days for a company to respond, in line with industry-accepted responsible disclosure guidelines. After he found that WD updated the My Cloud firmware in the meantime without fixing the vulnerability, he decided to post his findings. A year later, WD still hasn’t released a patch. The company confirmed that it knows of the vulnerability but did not say why it took more than a year to issue a fix. “We are in the process of finalizing a scheduled firmware update that will resolve the reported issue,” a spokesperson said, adding that it will arrive “within a few weeks.” WD said that several of its My Cloud products are vulnerable — including the EX2, EX4, and Mirror, but not My Cloud Home. In the meantime, Vermeulen said that there’s no fix and that users have to “just disconnect” the drive altogether if they want to keep their data safe.
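The description above — a dashboard that hands out privileged tools without properly checking who is asking — is a classic authentication-bypass pattern. The sketch below is not Western Digital’s actual code; it is a generic, hypothetical illustration of the difference between trusting client-supplied identity and checking a server-side session:

```python
# Hypothetical sketch of the bug class, not Western Digital's code.

# Vulnerable pattern: authorization based on client-controlled state.
def is_admin_vulnerable(request):
    # The dashboard believes whatever identity the client claims.
    return request.get("cookies", {}).get("username") == "admin"

# Safer pattern: identity comes from a server-side session record that
# can only be created by actually logging in with valid credentials.
SESSIONS = {}  # session token -> authenticated user, filled at login

def is_admin_fixed(request):
    user = SESSIONS.get(request.get("cookies", {}).get("session", ""))
    return user == "admin"

# An attacker who simply forges a cookie gets admin from the vulnerable
# check but is rejected by the session-backed one.
forged = {"cookies": {"username": "admin"}}
```

The fix is structural rather than a one-line patch: every privileged endpoint has to derive the caller’s identity from state the server controls, which may be part of why a proper firmware update took so long.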
iOS 12 is barely out the door, but Apple is already testing iOS 12.1 with a developer beta version. Developers have dug into the beta and found references to a brand new iPad that would support Face ID. First, there are changes to Face ID itself: the iOS 12.1 beta contains references to Face ID working in landscape orientation. Face ID on the iPhone is limited to portrait orientation. Chances are you didn’t even notice this limitation, because there’s only one orientation for the lock screen and home screen. But the iPad is a different story, as people tend to use it in landscape. And even when you hold it in landscape, some people will have the home button on the left while others will have the home button on the right. In other words, in order to bring Face ID to the iPad, it needs to support multiple orientations. This beta indicates that iOS 12.1 could be the version of iOS that ships with the next iPad. If that wasn’t enough, there’s a new device in the setup reference files. This device is called iPad2018Fall, which clearly means that a new iPad is right around the corner. Analyst Ming-Chi Kuo previously predicted that the iPad Pro could switch from Lightning to USB-C. This would open up a ton of possibilities when it comes to accessories. For instance, you could plug in an external monitor without any dongle and send a video feed to it. As for iPhone users, in addition to bug fixes, iOS 12.1 could bring back Group FaceTime, a feature that was removed at the last minute before the release of iOS 12. If it’s still too buggy, Apple could choose to remove the feature once again. Memojis could also support syncing across your devices, which would be useful for an iPad Pro with Face ID.
Newegg is cleaning up its website after a month-long data breach. Hackers injected 15 lines of card-skimming code on the online retailer’s payments page, where it remained for more than a month, between August 14 and September 18, Yonathan Klijnsma, a threat researcher at RiskIQ, told TechCrunch. The code siphoned off credit card data from unsuspecting customers to a server controlled by the hackers with a similar domain name — likely to avoid detection. The server even used an HTTPS certificate to blend in. The code was served to both desktop and mobile customers — though it’s unclear if mobile customers are affected. The online electronics retailer removed the code on Tuesday after it was contacted by incident response firm Volexity, which first discovered the card-skimming malware. Newegg is one of the largest retailers in the US, making $2.65 billion in revenue in 2016. The company touts more than 45 million monthly unique visitors, but it’s not known precisely how many customers completed transactions during the period. When reached, a Newegg spokesperson did not immediately comment. Klijnsma called the incident “another well-disguised attack.” RiskIQ attributed the Newegg credit card theft to the Magecart group, a collective of hackers that carry out targeted attacks against vulnerable websites, and said the code used was near identical to that seen in other recent Magecart skimming attacks. “The breach of Newegg shows the true extent of Magecart operators’ reach,” said Klijnsma. “These attacks are not confined to certain geolocations or specific industries — any organization that processes payments online is a target.” Like previous card-skimming campaigns, he said, the hackers “integrated with the victim’s payment system and blended with the infrastructure and stayed there as long as possible.” Anyone who entered their credit card data during the period should immediately contact their bank.
It didn’t hurt. I thought someone had dropped a small cardboard box on my head. It felt sharp and light. I was sitting on the floor, along the back of the crowd, when an Intel Shooting Star Mini drone dropped on my head. Audi had put on quite a production for the reveal of its first all-electric vehicle, the e-tron. The automaker went all out: it put journalists, executives and car dealers on a three-story paddle boat and sent us on a two-hour journey across San Francisco Bay. I had a beer and two dumplings. We were headed to a long-vacated Ford manufacturing plant in Richmond, CA. By the time we reached our destination, the sun had set and Audi was ready to begin. Suddenly, in front of the boat, Intel’s Shooting Star drones put on a show that ended with Audi’s trademark four-ring logo. The show continued as music pounded inside the warehouse, and just before the reveal of the e-tron, Intel’s Shooting Star Minis celebrated the occasion with a light show a couple of feet above attendees’ heads. That’s when one hit me. Natalie Cheung, GM of Intel Drone Light Shows, told me the team knew one drone had gone rogue when it failed to land in its designated zone. According to Cheung, the Shooting Star Mini drones were designed with safety in mind. “The drone frame is made of flexible plastics, has prop guards, and is very small,” she said. “The drone itself can fit in the palm of your hand. In addition to safety being built into the drone, we have systems and procedures in place to promote safety. For example, we have visual observers around the space watching the drones in flight and communicating with the pilot in real-time. We have built-in software to regulate the flight paths of the drones.” After the crash, I assumed someone from Audi or Intel would come around to collect the lost drone, but no one did, and at the end of the show, I was unable to find anyone who knew where I could find the Intel staff. I notified my Intel contacts first thing the following morning and provided a local address where they could get the drone.
As of publication, the drone is still on my desk. I have covered Intel’s Shooting Star program before. It’s a fascinating program and one of the most impressive uses of drones I’ve seen. The outdoor shows, which have been flown at the Super Bowl and the Olympics, are breathtaking. Hundreds of drones take to the sky, perform a seemingly impossible dance and then return home. A sophisticated program designates each drone’s route, GPS ensures each is where it’s supposed to be, and the entire fleet is controlled by just one person. Intel launched the Shooting Star Mini, an indoor version, at CES in 2018. The concept is the same, but these drones do not use GPS to determine their location. The result is something even more magical than the outdoor version, because with the Shooting Star Minis, the drones are often directly above the viewers. It’s an incredible experience to watch drones dance several feet overhead. It feels slightly dangerous. That’s the draw. And that poses a safety concern. The drone that hit me is light and mostly plastic. It weighs very little and is about 6 inches by 4 inches. A cage surrounds the bottom of the rotors, though not the top. If there’s a power button, I can’t find it. The full-size drones are made out of plastic and Styrofoam. Safety has always been baked into the Shooting Star programs, but I’m not sure the current protocols are enough. I was seated on the floor along the back of the venue. Most of the attendees were standing, taking selfies with the performing drones. It was a lovely show. When the drone came down on my head, it tumbled onto the floor and the rotors continued to spin. A member of the catering staff walking behind the barrier I was sitting against reached out and touched the spinning rotors. I’m sure she’s fine, but when her finger touched the spinning rotor, she jumped in surprise. At this point, seconds after it crashed, the drone was upside down and, like an upturned beetle, continued to operate for a few seconds until the rotors shut off. To be clear, I was not hurt.
And that’s not the point. Drone swarm technology is fascinating and could lead to incredible use cases. Swarms of drones could quickly and efficiently inspect industrial equipment and survey crops. And they make for great shows in outdoor venues. But are they ready to be used inside, above people’s heads? I’m already going bald. I don’t need help.