Why did last night’s ‘Game of Thrones’ look so bad? Here comes the science!

4:32pm, 29th April, 2019
Last night’s episode of “Game of Thrones” was a wild ride and inarguably one of an epic show’s more epic moments — if you could see it through the dark and the blotchy video. It turns out even one of the most expensive and meticulously produced shows in history can fall prey to the scourge of low-quality streaming and bad TV settings.

The good news is that this episode is going to look amazing on Blu-ray, or potentially in future, better streams and downloads. The bad news is that millions of people already had to see it in a way its creators surely lament. You deserve to know why this was the case. I’ll be simplifying a bit here because this topic is immensely complex, but here’s what you should know. (By the way, I can’t entirely avoid spoilers, but I’ll try to stay away from anything significant in words or images.)

It was clear from the opening shots of last night’s episode, “The Long Night,” that this was going to be a dark one. The army of the dead faces off against the allied living forces in the darkness, made darker by a bespoke storm brought in by, shall we say, a Mr. N.K., to further demoralize the good guys.

If you squint you can just make out the largest army ever assembled.

Thematically and cinematographically, setting this chaotic, sprawling battle at night is a powerful creative choice and a valid one, and I don’t question the showrunners, director, and so on for it. But technically speaking, setting this battle at night, and in fog, is just about the absolute worst-case scenario for the medium this show is native to: streaming home video. Here’s why.

Compression factor

Video has to be compressed in order to be sent efficiently over the internet, and although we’ve made enormous strides in video compression and the bandwidth available to most homes, there are still fundamental limits. The master video that HBO put together from the actual footage, FX, and color work that goes into making a piece of modern media would be huge: hundreds of gigabytes, if not terabytes. That’s because the master has to include all the information on every pixel in every frame, no exceptions.

Imagine if you tried to “stream” a terabyte-sized TV episode. You’d have to be able to download upwards of 200 megabytes per second for the full 80 minutes of this one. Few people in the world have that kind of connection — it would basically never stop buffering. Even 20 megabytes per second is asking too much by a long shot. Two megabytes per second is doable — comfortably under the 25-megabit speed (that’s bits; divide by 8 to get bytes) we use to define broadband download speeds.
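To put rough numbers on that, here is a quick back-of-the-envelope sketch in Python. The 1 TB master size and 80-minute runtime are the approximate figures from above, not HBO's actual numbers:

```python
# Back-of-the-envelope: what streaming an uncompressed-ish master would take.
runtime_s = 80 * 60        # roughly an 80-minute episode
master_bytes = 1e12        # roughly a 1 TB master file (assumption from above)

bytes_per_s = master_bytes / runtime_s
print(f"{bytes_per_s / 1e6:.0f} MB/s")          # ~208 MB/s
print(f"{bytes_per_s * 8 / 1e6:.0f} Mbps")      # ~1667 Mbps

# A realistic streaming budget is closer to 2 MB/s (16 Mbps):
print(f"~{bytes_per_s / 2e6:.0f}x compression needed")   # ~104x
```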
So how do you turn a large file into a small one? Compression — we’ve been doing it for a long time, and video, though different from other types of data in some ways, is still just a bunch of zeroes and ones. In fact it’s especially susceptible to strong compression because of how one video frame is usually very similar to the last one and the next one. There are all kinds of shortcuts you can take that reduce the file size immensely without noticeably impacting the quality of the video. These compression and decompression techniques fit into a system called a “codec.”

But there are exceptions to that, and one of them has to do with how compression handles color and brightness. Basically, when the image is very dark, the codec can’t display color very well.

The color of winter

Think about it like this: There are only so many ways to describe colors in a few words. If you have one word you can say red, or maybe ochre or vermilion, depending on your interlocutor’s vocabulary. But if you have two words you can say dark red, darker red, reddish black, and so on. The codec has a limited vocabulary as well, though its “words” are the number of bits it can use to describe a pixel. This lets it succinctly describe a huge array of colors with very little data by saying: this pixel has this bit value of color, this much brightness, and so on. (I didn’t originally want to get into this, but this is what people are talking about when they say bit depth, or even “highest quality pixels.”)

But this also means that there are only so many gradations of color and brightness it can show. Going from a very dark grey to a slightly lighter grey, it might be able to pick five intermediate shades. That’s perfectly fine if it’s just on the hem of a dress in the corner of the image. But what if the whole image is limited to that small selection of shades? Then you get what we saw last night.

See how Jon (I think) is made up almost entirely of a handful of different colors (brightnesses of a similar color, really), with big, obvious borders between them? This issue is called “banding,” and it’s hard not to notice once you see how it works. Images on video can be incredibly detailed, but places where there are subtle changes in color — often a clear sky or some other large but mild gradient — will exhibit large stripes as the codec goes from “darkest dark blue” to “darker dark blue” to “dark blue,” with no “darker darker dark blue” in between.

Check out this image. Above is a smooth gradient encoded with high color depth. Below that is the same gradient encoded with lossy JPEG compression — different from what HBO used, obviously, but you get the idea.

Banding has plagued streaming video forever, and it’s hard to avoid even in major productions — it’s just a side effect of representing color digitally. It’s especially distracting because our eyes obviously don’t have that limitation. A high-definition screen may actually show more detail than your eyes can discern from couch distance, but color issues? Our visual systems flag them like crazy. You can minimize it, but it’s always going to be there, until the point when we have as many shades of grey as we have pixels on the screen.
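If you want to see the mechanism in miniature, here is a toy sketch. Real codecs quantize transform coefficients rather than raw pixel values, so this illustrates the idea rather than H.264/HEVC internals, but it shows how a dark scene leaves the encoder with almost nothing to work with:

```python
# Toy banding model: snap 8-bit pixel values to a small number of available shades.
def quantize(value, levels=6):
    """Snap a 0-255 value to the nearest of `levels` evenly spaced shades."""
    step = 255 / (levels - 1)
    return round(round(value / step) * step)

bright_gradient = range(0, 256)   # a daylight sky spans the whole range
night_gradient = range(0, 41)     # a dark, foggy scene might live entirely in 0-40

print(len({quantize(v) for v in bright_gradient}), "shades across the bright gradient")  # 6
print(len({quantize(v) for v in night_gradient}), "shades across the dark gradient")     # 2
```

Six coarse shades across a sunny sky can pass for smooth; two shades across an entire foggy battlefield is exactly the stair-stepping you saw in the frame above.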
So back to last night’s episode. Practically the entire show took place at night, which removes about three-quarters of the codec’s brightness-color combos right there. It also wasn’t a particularly colorful episode, a directorial or photographic choice that highlighted things like flames and blood, but further limited the ability to digitally represent what was on screen. It wouldn’t have been too bad if the background were black and people were lit well so they popped out, though.

The last straw was the introduction of the cloud, fog, or blizzard, whatever you want to call it. This kept the brightness of the background just high enough that the codec had to represent it with one of its handful of dark greys, and the subtle movements of fog and smoke came out as blotchy messes (often called “compression artifacts”) as the compression desperately tried to pick the best shade for each group of pixels.

Just brightening it doesn’t fix things, either — because the detail is already crushed into a narrow range of values, you just get a bandy image that never gets completely black, making it look washed out, as you see here. (Anyway, the darkness is a stylistic choice. You may not agree with it, but that’s how it’s supposed to look, and messing with it beyond making the darkest details visible could be counterproductive.)

Now, it should be said that compression doesn’t have to be this bad. For one thing, the more data it is allowed to use, the more gradations it can describe, and the less severe the banding. It’s also possible (though I’m not sure where it’s actually done) to repurpose the rest of the codec’s “vocabulary” to describe a scene where its other color options are limited. That way the full bandwidth can be used to describe a nearly monochromatic scene even though, strictly speaking, it should only be using a fraction of it.

But neither of these is likely an option for HBO. Increasing the bandwidth of the stream is costly, since this is being sent out to tens of millions of people — a bitrate increase big enough to change the quality would also massively swell their data costs. When you’re distributing to that many people, it also introduces the risk of hated buffering or errors in playback, which are obviously a big no-no. It’s even possible that HBO lowered the bitrate because of network limitations — “Game of Thrones” really is on the frontier of digital distribution. And using an exotic codec might not be possible because only commonly used commercial ones are really capable of being applied at scale. Kind of like how we try to use standard parts for cars and computers.

This episode almost certainly looked fantastic in the mastering room and FX studios, where they not only had carefully calibrated monitors with which to view it but were also working with brighter footage (it would be darkened to taste by the colorist) and less or no compression. They might not even have seen the “final” version that fans “enjoyed.” We’ll see the better copy eventually, but in the meantime the choice of darkness, fog, and furious action meant the episode was going to be a muddy, glitchy mess on home TVs. And while we’re on the topic…

You mean it’s not my TV?

Well… to be honest, it might be that too. What I can tell you is that simply having a “better” TV by specs, such as 4K or a higher refresh rate or whatever, would make almost no difference in this case. Even built-in de-noising and de-banding algorithms would be hard pressed to make sense of “The Long Night.” And one of the best new display technologies, OLED, might even make it look worse! Its “true blacks” are much darker than an LCD’s backlit blacks, so the jump to the darkest grey could be way more jarring.

That said, it’s certainly possible that your TV is also set up poorly. Those of us sensitive to this kind of thing spend forever fiddling with settings and getting everything just right for exactly this kind of situation. There are dozens of us!

Now who’s “wasting his time” calibrating his TV? — John Siracusa (@siracusa)

Usually “calibration” is actually a pretty simple process of making sure your TV isn’t on the absolute worst settings, which unfortunately many are out of the box. Here’s a very basic three-point guide to “calibrating” your TV:

1. Go through the “picture” or “video” menu and turn off anything with a special name, like “TrueMotion,” “Dynamic motion,” “Cinema mode,” or anything like that. Most of these make things look worse, especially anything that “smooths” motion. Turn those off first and never, ever turn them on again. Don’t mess with brightness, gamma, color space, or anything else you have to turn up or down from 50 or whatever.
2. Figure out lighting by putting on a good, well-shot movie in the situation you usually watch stuff — at night maybe, with the hall light on or whatever. While the movie is playing, click through any color presets your TV has. These are often things like “natural,” “game,” “cinema,” “calibrated,” and so on, and they take effect right away. Some may make the image look too green, or too dark, or whatever. Play around with them and use whichever makes it look best. You can always switch later – I myself switch between a lighter and a darker scheme depending on time of day and content.

3. Don’t worry about HDR, dynamic lighting, and all that stuff for now. There’s a lot of hype about these technologies and they are still in their infancy. Few will work out of the box, and the gains may or may not be worth it. The truth is a well-shot movie from the ’60s or ’70s can look just as good today as a “high dynamic range” show shot on the latest 8K digital cinema rig. Just focus on making sure the image isn’t being actively interfered with by your TV and you’ll be fine.

Unfortunately, none of these things will make “The Long Night” look any better until HBO releases a new version of it. Those ugly bands and artifacts are baked right in. But if you have to blame anyone, blame the streaming infrastructure that wasn’t prepared for a show taking risks in its presentation, risks I would characterize as bold and well executed, unlike the writing in the show lately. Oops, sorry, couldn’t help myself.

If you really want to experience this show the way it was intended, the fanciest TV in the world wouldn’t have helped last night, though when the Blu-ray comes out you’ll be in for a treat. But here’s hoping the next big battle takes place in broad daylight.
Is this the vertical-folding Motorola Razr?

10:02am, 29th April, 2019
This could be the upcoming Razr revival. Images of the device appeared online on Weibo and show a foldable design. Unlike the Galaxy Fold, though, Motorola’s implementation has the phone folding vertically — much like the original Razr.

This design offers a more compelling use case than other foldables. Instead of a traditional smartphone unfolding to a tablet-like display, Motorola’s design has a smaller device unfolding to a smartphone-sized display. The result is a compact phone that turns into a normal phone.

Pricing is still unclear, but the WSJ previously reported it would carry a $1,500 price tag when it’s eventually released. If it’s released.

Samsung was the first to market with the Galaxy Fold. Kind of. A few journalists were given Galaxy Fold units ahead of its launch, but a handful of units failed in the first days. Samsung quickly postponed the launch and recalled all the review units.

Despite this leak, Motorola has yet to confirm when this device will hit the market. Given how the Galaxy Fold’s debut went, it will likely be extra cautious before launching it to the general public.
Kiwi’s food delivery bots are rolling out to 12 more colleges

3:09pm, 25th April, 2019
If you’re a student at UC Berkeley, Kiwi’s delivery robots are probably a familiar sight by now, trundling along with a burrito inside to deliver to a dorm or apartment building. Now students at a dozen more campuses will be able to join this great, lazy future of robotic delivery as Kiwi expands to them with a clever student-run model.

Speaking recently at the Berkeley campus, Kiwi’s Felipe Chavez and Sasha Iatsenia discussed the success of their burgeoning business and the way they planned to take it national.

In case you’re not aware of the Kiwi model, it’s basically this: When you place an order online with a participating restaurant, you have the option of delivery via Kiwi. If you so choose, one of the company’s fleet of knee-high robots with insulated, locking storage compartments will swing by the restaurant, your order is placed within, and the robot brings it to your front door (or as close as it can reasonably get). You can even watch the last bit live from the robot’s perspective as it rolls up to your place.

The robots are what Kiwi calls “semi-autonomous.” This means that although they can navigate most sidewalks and avoid pedestrians, each has a human monitoring it and setting waypoints for it to follow, on average every five seconds. Iatsenia told me that they’d tried going fully autonomous and that it worked… most of the time. But most of the time isn’t good enough for a commercial service, so they’ve got humans in the loop. They’re working on improving autonomy, but for now this is how it is.
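To make that control model concrete, here is a minimal sketch of the human-in-the-loop setup described above. The class and numbers are illustrative placeholders, not Kiwi’s actual software:

```python
from collections import deque

# Toy model of "semi-autonomous" delivery: a remote operator drops a waypoint
# every few seconds, and the bot handles the short hop to each one on its own.
class DeliveryBot:
    def __init__(self):
        self.position = (0.0, 0.0)
        self.waypoints = deque()

    def operator_sets_waypoint(self, x, y):
        """Called by the human supervisor, on average every five seconds."""
        self.waypoints.append((x, y))

    def step(self):
        """Local autonomy: drive to the next waypoint (obstacle avoidance omitted)."""
        if self.waypoints:
            self.position = self.waypoints.popleft()

bot = DeliveryBot()
bot.operator_sets_waypoint(1.0, 0.5)
bot.operator_sets_waypoint(2.0, 1.0)
bot.step()
bot.step()
print(bot.position)   # (2.0, 1.0)
```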
That the robots are being controlled in some fashion by a team of people in Colombia (from where the co-founders hail) does take a considerable amount of the futurism out of this endeavor, but on reflection it’s kind of a natural evolution of the existing delivery infrastructure. After all, someone has to drive the car that brings you your food, as well. And in reality, most AI is operated or informed directly or indirectly by actual people.

That those drivers are in South America operating multiple vehicles at a time is a technological advance over your average delivery vehicle — though it must be said that there is an unsavory air of offshoring labor to save money on wages. That said, few people shed tears over the wages earned by the Chinese assemblers who put together our smartphones and laptops, or the garbage pickers who separate your poorly sorted recycling. The global labor economy is a complicated one, and the company is making jobs in the place where it was at least partly born.

Whatever the method, Kiwi has traction: it has done more than 50,000 deliveries and the model seems to have proven itself. Customers are happy, they get stuff delivered more than ever once they get the app, and there are fewer and fewer incidents where a robot is kicked over or, you know, worse. Notably, the founders said onstage, the community has really adopted the little vehicles, and should one overturn or be otherwise interfered with, it’s often set on its way soon after by a passerby.

Iatsenia and Chavez think the model is ready to push out to other campuses, where a similar effort will have to take place — but rather than do it themselves by raising millions and hiring staff all over the country, they’re trusting the robotics-loving student groups at other universities to help out. For a small and low-cash startup like Kiwi, it would be risky to overextend by taking on a major round and using that to scale up. They started as robotics enthusiasts looking to bring something like this to their campus, so why can’t they help others do the same?

So the team looked at dozens of universities, narrowing them down by factors important to robotic delivery: layout, density, commercial corridors, demographics and so on. Ultimately they arrived at the following list: Northern Illinois University, University of Oklahoma, Purdue University, Texas A&M, Parsons, Cornell, East Tennessee State University, University of Nebraska-Lincoln, Stanford, Harvard, NYU and Rutgers.

What they’re doing is reaching out to robotics clubs and student groups at those colleges to see who wants to take partial ownership of Kiwi administration out there. Maintenance and deployment would still be handled by Berkeley students, but the student clubs would go through a certification process and then handle the local work, like righting a capsized bot and dealing with on-site issues with customers and restaurants.

“We are exploring several options to work with students down the road, including rev share,” Iatsenia told me. “It depends on the campus.”

So far they’ve sent 40 robots to the 12 campuses listed and will be rolling out operations as the programs move forward on their own time. If you’re not at one of the unis listed, don’t worry — if this goes the way Kiwi plans, it sounds like you can expect further expansion soon.
LEGO Braille bricks are the best, nicest and, in retrospect, most obvious idea ever

5:31pm, 24th April, 2019
Braille is a crucial skill to learn for children with visual impairments, and with these new bricks, kids can learn through hands-on play rather than more rigid methods like Braille readers and printouts. Given the naturally Braille-like structure of LEGO blocks, it’s surprising this wasn’t done decades ago.

The truth is, however, that nothing can be obvious enough when it comes to marginalized populations like people with disabilities. But sometimes all it takes is someone in the right position to say, “You know what? That’s a great idea and we’re just going to do it.” It happened with the device pictured above, and it seems to have happened at LEGO.

Stine Storm led the project, but Morten Bonde, who himself suffers from degenerating vision, helped guide the team with the passion and insight that only comes with personal experience. In some remarks sent over by LEGO, Bonde describes his drive to help:

When I was contacted by the LEGO Foundation to function as internal consultant on the LEGO Braille Bricks project, and first met with Stine Storm, where she showed me the Braille bricks for the first time, I had a very emotional experience. While Stine talked about the project and the blind children she had visited and introduced to the LEGO Braille Bricks I got goose bumps all over the body. I just knew that I had to work on this project. I want to help all blind and visually impaired children in the world dare to dream and see that life has so much in store for them. When, some years ago, I was hit by stress and depression over my blind future, I decided one day that life is too precious for me not to enjoy every second of. I would like to help give blind children the desire to embark on challenges, learn to fail, learn to see life as a playground, where anything can come true if you yourself believe that they can come true. That is my greatest ambition with my participation in the LEGO Braille Bricks project.

The bricks themselves are very much like the originals, specifically the common 2×4 blocks, except they don’t have the full eight “studs” (so that’s what they’re called). Instead, they have the letters of the Braille alphabet, which happen to fit comfortably in a 2×3 array of studs, with room left on the bottom to put a visual indicator of the letter or symbol for sighted people. They’re compatible with ordinary LEGO bricks, and of course they can be stacked and attached to one another, though not with quite the same versatility as an ordinary block, as some symbols will have fewer studs. You’ll probably want to keep them separate, since they’re more or less identical unless you inspect them individually.
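For reference, a standard Braille cell is two dots wide by three tall, which is exactly why it drops so neatly onto a 2×3 patch of studs. Here is a tiny sketch of that mapping; the letter patterns are standard six-dot Braille, while the orientation on the physical brick is my assumption rather than LEGO’s published spec:

```python
# Standard 6-dot Braille: dots 1-3 run down the left column, dots 4-6 down the right.
BRAILLE = {
    "a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}, "e": {1, 5},
    "k": {1, 3}, "l": {1, 2, 3},
}

def brick_top(letter):
    """Render the 2x3 stud layout: 'o' = raised stud, '.' = no stud."""
    dots = BRAILLE[letter]
    rows = []
    for row in range(3):
        left, right = row + 1, row + 4   # Braille dot numbers in this row
        rows.append(("o" if left in dots else ".") + " " + ("o" if right in dots else "."))
    return "\n".join(rows)

print(brick_top("l"))   # prints three rows: "o .", "o .", "o ."
```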
All told, the set, which will be provided for free to institutions serving vision-impaired students, will include about 250 pieces: A-Z (with regional variants), the numerals 0-9, basic operators like + and =, and some “inspiration for teaching and interactive games.” Perhaps some specialty pieces for word games and math toys, that sort of thing.

LEGO was already one of the toys that can be enjoyed equally by sighted and vision-impaired children, but this adds a new layer, or I suppose just re-engineers an existing and proven one, to extend and specialize the decades-old toy for a group that already seems to have taken to it: “The children’s level of engagement and their interest in being independent and included on equal terms in society is so evident. I am moved to see the impact this product has on developing blind and visually impaired children’s academic confidence and curiosity already in its infant days,” said Bonde.

Danish, Norwegian, English and Portuguese blocks are being tested now, with German, Spanish and French on track for later this year. The kit should ship in 2020 — if you think your classroom could use these, get in touch with LEGO right away.
Huawei’s P30 Pro excels on the camera front

1:11pm, 24th April, 2019
It’s been a month since Huawei unveiled its latest flagship device, the Huawei P30 Pro. I’ve played with the P30 and P30 Pro for a few weeks and I’ve been impressed with the camera system. The P30 Pro is the successor to last year’s P20 Pro and features improvements across the board. It could have been a truly remarkable phone, but some issues still hold it back compared to more traditional Android phones.

A flagship device

The P30 Pro is by far the most premium device in the P line. It features a gigantic 6.47-inch OLED display, a small teardrop notch near the top, a fingerprint sensor integrated into the display and a lot of cameras.

Before diving into the camera system, let’s talk about the overall feel of the device. Compared to last year’s P20 Pro, the company removed the fingerprint sensor below the screen and made the notch smaller. The in-display fingerprint sensor doesn’t perform as well as a dedicated one, but it gets the job done.

It has become hard to differentiate smartphones based on design, and the P30 Pro looks a lot like the OnePlus 6T or the Samsung Galaxy S10. The display features a 19.5:9 aspect ratio with a 2340×1080 resolution, and it is curved around the edges. The result is a phone with gentle curves. The industrial design is less angular, even though the top and bottom edges of the device have been flattened. Huawei uses an aluminum frame and a glass back with colorful gradients.

Unfortunately, the curved display doesn’t work so well in practice. If you open an app with a unified white background, such as Gmail, you can see some odd-looking shadows near the edges.

Below the surface, the P30 Pro uses a Kirin 980 system-on-a-chip. Huawei’s homemade chip performs well. To be honest, smartphones have been performing well for a few years now; it’s hard to complain about performance anymore.

The phone features a 40W USB-C charging port and an impressive 4,200 mAh battery. For the first time, Huawei added wireless charging to the P series (up to 15W). You can also charge another phone or an accessory with reverse wireless charging, just like on the Samsung Galaxy S10. Unfortunately, you have to manually activate the feature in the settings every time you want to use it.

Huawei has also removed the speaker grille at the top of the display. The company now vibrates the display to turn it into a tiny speaker for your calls. In my experience, it works well.

While the phone ships with Android Pie, Huawei still layers on a lot of software customization with its EMUI user interface. There are a dozen useless Huawei apps that probably make sense in China, but don’t necessarily need to be there if you use Google apps. For instance, the HiCare app keeps sending me notifications. The onboarding process is also quite confusing, as some screens refer to Huawei features while others refer to standard Android features. It definitely won’t be a good experience for non-tech-savvy people.

(P30 Pro on the left, P30 on the right)

Four cameras to rule them all

The P20 Pro already had some great camera sensors and paved the way for night photos in recent Android devices. The P30 Pro camera system can be summed up in two words — more and better. The P30 Pro now features not one, not two, not three but f-o-u-r sensors on the back of the device.

The main camera is a 40 MP 27mm sensor with an f/1.6 aperture and optical image stabilization. There’s a 20 MP ultra-wide-angle lens (16mm) with an f/2.2 aperture. The 8 MP telephoto lens provides nearly 5x optical zoom compared to the main lens (125mm) with an f/3.4 aperture and optical image stabilization. And there’s a new time-of-flight sensor below the flash of the P30 Pro; the phone projects infrared light and captures the reflection with this new sensor.
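As a quick sanity check on that “nearly 5x” figure, optical zoom between fixed lenses is just the ratio of their equivalent focal lengths, using the 16mm, 27mm and 125mm numbers quoted above:

```python
# Zoom factors from the 35mm-equivalent focal lengths quoted above.
ultrawide_mm, main_mm, tele_mm = 16, 27, 125

print(f"telephoto vs. main: {tele_mm / main_mm:.1f}x")            # ~4.6x, marketed as ~5x
print(f"main vs. ultra-wide: {main_mm / ultrawide_mm:.1f}x")
print(f"telephoto vs. ultra-wide: {tele_mm / ultrawide_mm:.1f}x")  # the full optical range
```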
It has become a sort of meme already — yes, the zoom works incredibly well on the P30 Pro. In addition to packing a lot of megapixels into the main sensor, the company added a telephoto lens with a periscope design. The module uses a mirror to bend the light at a right angle, which lets Huawei stack more layers of glass in the lens without making the phone too thick. The company also combines the main camera sensor with the telephoto sensor to let you capture photos at 10x with a hybrid optical-digital zoom. Here’s a photo series with the wide-angle lens, the normal lens, a 5x zoom and a 10x zoom:

And it works incredibly well in daylight. Unfortunately, you won’t be able to use the telephoto lens at night, as it doesn’t perform as well as the main camera.

In addition to hardware improvements, Huawei has also worked on the algorithms that process your shots. Night mode performs incredibly well. You just have to hold your phone steady for eight seconds so that it can capture as much light as possible. Here’s what it looks like in a completely dark room vs. an iPhone X:

Huawei has also improved HDR processing and portrait photos. That new time-of-flight sensor works well when it comes to distinguishing a face from the background, for instance.

Once again, Huawei is a bit too heavy-handed with post-processing. If you use your camera with the Master AI setting, colors are too saturated. The grass appears much greener than it is in reality. Skin smoothing with the selfie camera still feels weird, too. The phone also aggressively smooths surfaces in dark shots.

When you pick a smartphone brand, you also pick a certain photography style. I’m not a fan of saturated photos, so Huawei’s bias toward unnatural colors doesn’t work in my favor. But if you like extremely vivid shots from insanely good sensors, the P30 Pro is for you. That array of lenses opens up a lot of possibilities and gives you more flexibility.

Fine print

The P30 Pro isn’t available in the U.S., but the company has already covered the streets of major European cities with P30 Pro ads. It costs €999 ($1,130) for 128GB of storage — there are more expensive options with more storage.

Huawei also unveiled a smaller device — the P30. It’s always interesting to look at the compromises of the more affordable model. On that front, there’s a lot to like about the P30. For €799 ($900) with 128GB, you get a solid phone. It has a 6.1-inch OLED display and shares a lot of specifications with its bigger sibling. The P30 features the same system-on-a-chip, the same teardrop notch, the same in-display fingerprint sensor and the same screen resolution. Surprisingly, the P30 Pro doesn’t have a headphone jack while the P30 has one.

There are some things you won’t find on the P30, such as wireless charging or the curved display. While the edges of the device are slightly curved, the display itself is completely flat. And I think it looks better. Cameras are slightly worse on the P30, and you won’t be able to zoom in as aggressively. Here’s the full rundown: a 40 MP main sensor with an f/1.8 aperture and optical image stabilization, a 16 MP ultra-wide-angle lens with an f/2.2 aperture, an 8 MP telephoto lens that should provide 3x optical zoom, and no time-of-flight sensor.
In the end, it really depends on what you’re looking for. The P30 Pro definitely has the best cameras of the P series. But the P30 is also an attractive phone for those looking for a smaller device. Huawei has once again pushed the limits of what you can pack in a smartphone when it comes to cameras. While iOS and Android are more mature than ever, it’s fascinating to see that hardware improvements are not slowing down.
Alphabet’s Wing gets FAA permission to start delivering by drone

1:24pm, 23rd April, 2019
Wing Aviation, the drone-based delivery startup born out of Alphabet’s X labs, has received the first FAA certification in the country for commercial carriage of goods. It might not be long before you’re getting your burritos sent par avion.

The company has been performing tests for years, making thousands of flights and supervised deliveries to show that its drones are safe and effective. Many of those flights were in Australia, where the company recently began commercial deliveries in suburban Canberra. Finland and other countries are also in the works.

Wing’s first U.S. operations, starting later this year, will be in Blacksburg and Christiansburg, Virginia; obviously an operation like this requires close coordination with municipal authorities as well as federal ones. You can’t just get a permission slip from the FAA and start flying over everyone’s houses.

“Wing plans to reach out to the local community before it begins food delivery, to gather feedback to inform its future operations,” the FAA writes in a press release. Here’s hoping that means you can choose whether or not these loud little aircraft will be able to pass through your airspace.

Although the obvious application is getting a meal delivered quickly even when traffic is bad, there are plenty of other applications. One imagines quick delivery of medications ahead of EMTs, or blood being transferred quickly between medical centers. I’ve asked Wing for more details on its plans to roll this out elsewhere in the U.S., and will update this story if I hear back.
How Squishy Robotics created a robot that can be safely dropped out of a helicopter

8:53pm, 18th April, 2019
If you want to build a robot that can fall hundreds of feet and be none the worse for wear, legs are pretty much out of the question. The obvious answer, then, is a complex web of cable-actuated rods. Obvious to Squishy Robotics, anyway, whose robots look delicate but are in fact among the most durable out there.

The startup has been operating more or less in stealth mode, emerging publicly today onstage at our Robotics + AI Sessions event in Berkeley, Calif. It began, co-founder and CEO Alice Agogino told me, as a project connected to NASA Ames a few years back.

“The original idea was to have a robot that could be dropped from a spacecraft and survive the fall,” said Agogino. “But I could tell this tech had earthly applications.”

Her reason for thinking so was learning that first responders were losing their lives due to poor situational awareness in areas where they were being deployed. It’s hard to tell without actually being right there that a toxic gas is lying close to the ground, or that there is a downed electrical line hidden under a fallen tree, and so on. Robots are well suited to this type of reconnaissance, but it’s a bit of a Catch-22: you have to get close to deploy a robot, but you need the robot precisely because it isn’t safe to get close in the first place.

Unless, of course, you can somehow deploy the robot from the air. This is already done, but it’s rather clumsy: picture a wheeled bot floating down under a parachute, missing its mark by a hundred feet due to high winds or getting tangled in its own cords.

“We interviewed a number of first responders,” said Agogino. “They told us they want us to deploy ground sensors before they get there, to know what they’re getting into; then when they get there they want something to walk in front of them.”

Squishy’s solution can’t quite be dropped from orbit, as the original plan was for exploring Saturn’s moon Titan, but the robots can fall from 600 feet, and likely much more than that, and function perfectly well afterwards. It’s all because of the unique “tensegrity structure,” which looks like a game of pick-up-sticks crossed with cat’s cradle. (Only the freshest references for you, reader.)

If it looks familiar, you’re probably thinking of the structures famously studied by Buckminster Fuller; they’re related but quite different. This one had to be engineered not just to withstand great force from dropping, but to shift in such a way that it can walk or crawl along the ground and even climb low obstacles. That’s a nontrivial shift away from the buckyball and other geodesic types.

“We looked at lots of different tensegrity structures — there are an infinite number,” Agogino said. “It has six compressive elements, which are the bars, and 24 other elements, which are the cables or wires. But they could be shot out of a cannon and still protect the payload. And they’re so compliant, you could throw them at children, basically.”

(That’s not the mission, obviously. But there are in fact children’s toys with tensegrity-type designs.)

Inside the bars are wires that can be pulled or slackened to move the various points of contact with the ground, changing the center of gravity and causing the robot to roll or spin in the desired direction. A big part of the engineering work was making the tiny motors that control the cables, and then essentially inventing a method of locomotion for this strange shape.

“On the one hand it’s a relatively simple structure, but it’s complicated to control,” said Agogino.
“To get from A to B there are any number of solutions, so you can just play around — we even had kids do it. But to do it quickly and accurately, we used machine learning and AI techniques to come up with an optimum technique. First we just created lots of motions and observed them. And from those we found patterns, different gaits. For instance if it has to squeeze between rocks, it has to change its shape to be able to do that.”

The mobile version would be semi-autonomous, meaning it would be controlled more or less directly but would figure out on its own the best way to accomplish “go forward” or “go around this wall.” The payload can be customized with various sensors and cameras, depending on the needs of the client — one being deployed at a chemical spill needs a different loadout than one dropping into a radioactive area, for instance.

To be clear, these things aren’t going to win an all-out race against a Spot or a wheeled robot on unbroken pavement. But for one thing, those are built specifically for certain environments, and there’s room for more all-purpose, adaptable types. And for another, neither of those can be dropped from a helicopter and survive. In fact, almost no robots at all can.

“No one can do what we do,” Agogino preened. At a recent industry demo day where robot makers showed off air-drop models, “we were the only vendor that was able to do a successful drop.” And although the tests only went up to a few hundred feet, there’s no reason Squishy’s bots shouldn’t be able to be dropped from 1,000, or for that matter 50,000 feet up. They hit terminal velocity after a relatively short distance, meaning they’re hitting the ground as hard as they ever will, and working just fine afterwards.

That has plenty of parties interested in what Squishy is selling. The company is still extremely small and has very little funding: mainly a $500,000 grant from NASA and $225,000 from another source. But they’re also working out of UC Berkeley’s Skydeck accelerator, which has already put them in touch with a variety of resources and entrepreneurs, and the upcoming May 14 demo day will put their unique robotics in front of hundreds of VCs eager to back the latest academic spin-offs. You can keep up with the latest from the company as it makes its way out of stealth.
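About that terminal-velocity point, here is a quick sanity check. The mass, drag coefficient and frontal area below are made-up placeholder values for a small open lattice, not Squishy’s actual specs, and air density is held constant, but the shape of the result is the point: past a couple hundred feet, extra altitude adds essentially no impact speed.

```python
import math

# Placeholder figures for a small open-lattice robot; assumptions, not real specs.
mass, drag_coeff, area = 2.0, 1.0, 0.3   # kg, dimensionless, m^2
rho, g = 1.225, 9.81                     # sea-level air density (kg/m^3), gravity (m/s^2)

def impact_speed(drop_m, dt=0.01):
    """Integrate a straight-down fall with quadratic drag; return speed at the ground."""
    v = fallen = 0.0
    while fallen < drop_m:
        accel = g - 0.5 * rho * drag_coeff * area * v * v / mass
        v += accel * dt
        fallen += v * dt
    return v

print(f"terminal velocity ~ {math.sqrt(2 * mass * g / (rho * drag_coeff * area)):.1f} m/s")
for feet in (100, 600, 1000, 50_000):
    print(f"{feet:>6} ft drop -> hits at {impact_speed(feet * 0.3048):.1f} m/s")
```

Whatever the real numbers are, every drop height past the first hundred feet or so lands at roughly the same speed, which is the whole trick.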
Next iPhone could feature an ultra-wide lens

12:13pm, 18th April, 2019
A new report from analyst Ming-Chi Kuo details the cameras in the next generation of iPhones. The report confirms previous rumors: the successors to the iPhone XS and XS Max will have three camera sensors on the back of the device. In addition to the main camera and the 2x telephoto camera, Apple could add an ultra-wide 12-megapixel lens. Many Android phones already feature an ultra-wide lens, so it makes sense that Apple would give you more flexibility by adding a third camera.

Kuo thinks Apple will use a special coating on the camera bump to hide the lenses. It’s true that pointing three cameras at someone is starting to look suspicious. Leaked renders (without any special coating) made the rounds a few months ago:

The iPhone XR update will feature two cameras instead of one. I bet Apple will add a 2x camera.

On the front of the device, Apple could be planning a big upgrade for the selfie camera. The company could swap the existing camera, which uses a lens with four layers of glass, for one with five layers of glass. Apple could also give the camera a resolution bump, jumping from 7 megapixels to 12 megapixels. All three models should get the new selfie camera.
Google Home’s Philips Hue integration can now wake you up gently

12:23pm, 17th April, 2019
Maybe you love the sound of your alarm clock blaring in the morning, heralding a new day full of joy and adventure. More likely, though, you don’t. If you prefer a more gentle wake-up (and have invested in some smart home technology), here’s some good news: Google Home now lets you use your Philips Hue lights to wake you up by slowly changing the light in your room.

Google first announced this integration at CES earlier this year, with a planned rollout in March. Looks like that took a little while longer, as Google and Philips have only now gently brought this feature to life.

You can use your Home to turn on “Gentle Wake,” which starts changing your lights 30 minutes before your wake-up time to mimic a sunrise, or go the opposite way and have the lights mimic a sunset as you get ready for bed. You can either trigger these light changes through an alarm or with a command that starts them immediately.

While the price of white Hue bulbs has come down in recent years, colored Hue lights remain rather pricey, with single bulbs going for around $40. If that doesn’t hold you back, though, the Gentle Sleep and Wake features are now available in the U.S., U.K., Canada, Australia, Singapore and India, in English only.
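For the curious, the effect itself is easy to approximate if you drive a Hue bulb directly over the bridge’s local REST API. This is a rough sketch of a 30-minute sunrise ramp, not Google’s actual implementation; the bridge address, API username and light ID are placeholders you would replace with your own:

```python
import time
import requests

# Placeholders: your bridge's IP address, an authorized API username and a light ID.
BRIDGE, USER, LIGHT = "192.168.1.2", "your-api-username", "1"
URL = f"http://{BRIDGE}/api/{USER}/lights/{LIGHT}/state"

def sunrise(minutes=30, steps=30):
    """Ramp a Hue bulb from dim, warm light up to bright, cooler light."""
    for i in range(steps + 1):
        t = i / steps
        requests.put(URL, json={
            "on": True,
            "bri": int(1 + t * 253),   # brightness: 1 (minimum) -> 254 (maximum)
            "ct": int(500 - t * 250),  # color temperature in mireds: very warm -> neutral
            "transitiontime": 600,     # fade each step over 60 s (unit is 100 ms)
        })
        time.sleep(minutes * 60 / steps)

sunrise()
```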
Meet the first judges for The Europas Awards (27 June) and enter your startup now!

10:13am, 17th April, 2019
I’m excited to announce that The Europas Awards is really shaping up! The awards will be held on 27 June 2019, in London, UK, on the front lawn of the venue in Hoxton, London — creating a fantastic and fun garden-party atmosphere in the heart of London’s tech startup scene. TechCrunch is once more the exclusive media sponsor of the awards and conference, alongside new ‘tech, culture & society’ event creator The Pathfounder.

You can nominate a startup, accelerator or venture investor that you think deserves to be recognized for its achievements in the last 12 months.

*** The deadline for nominations is 1 May 2019. ***

For the 2019 awards, we’ve overhauled the categories to a set that we believe better reflects the range of innovation, diversity and ambition we see in the European startups being built and launched today. There are now 20 categories, including new additions to cover AgTech / FoodTech, SpaceTech, GovTech and Mobility Tech. Attendees, nominees and winners will also get event discounts later this year.

The Europas “Diversity Pass”

We’d like to encourage more diversity in tech! That’s why, for the upcoming invitation-only “Pathfounder” event held on the afternoon before The Europas Awards, we’ve reserved a tranche of free tickets to ensure that we include more women and people of colour who are pre-seed or seed-stage tech startup founders. If you are a woman founder or a founder of colour, apply for one of the limited free diversity passes to the event. The Pathfounder event will feature premium content and invitees, designed to be a ‘fast download’ into the London tech scene for European founders looking to raise money or relocate to London.

The Europas Awards

The Europas Awards results are based on voting by expert judges and the industry itself. But key to it all is that there are no “off-limits areas” at The Europas, so attendees can mingle easily with VIPs.

The complete list of categories is here:

AgTech / FoodTech
CleanTech
Cyber
EdTech
FashTech
FinTech
Public, Civic and GovTech
HealthTech
MadTech (AdTech / MarTech)
Mobility Tech
PropTech
RetailTech
SaaS / Enterprise or B2B
SpaceTech
Tech for Good
Hottest Blockchain Project
Hottest Blockchain Investor
Hottest VC Fund
Hottest Seed Fund
Grand Prix

Timeline of The Europas Awards deadlines:

* 6 March 2019 – Submissions open
* 1 May 2019 – Submissions close
* 10 May 2019 – Public voting begins
* 18 June 2019 – Public voting ends
* 27 June 2019 – Awards Bash

Amazing networking

We’re also shaking up the awards dinner itself. Instead of a sit-down gala dinner, we’ve taken on your feedback for more opportunities to network. Our awards ceremony this year will be in the setting of a garden lawn party, where you’ll be able to meet and mingle more easily with free-flowing drinks and a wide selection of street food (including vegetarian/vegan). The ceremony itself will last approximately 75 minutes, with the rest of the time dedicated to networking. If you’d like to talk about sponsoring or exhibiting, please contact dianne@thepathfounder.com

Instead of thousands and thousands of people, think of a great summer event with the most interesting and useful people in the industry, including key investors and leading entrepreneurs.

The Europas Awards have been going for the last ten years, and we’re the only independent and editorially driven event to recognise the European tech startup scene. The winners have been featured in Reuters, Bloomberg, VentureBeat, Forbes, Tech.eu, The Memo, Smart Company, CNET and many others, and of course, TechCrunch.
• No secret VIP rooms, which means you get to interact with the speakers
• Key founders and investors attending
• Journalists from major tech titles, newspapers and business broadcasters

Meet the first set of our 20 judges:

Brent Hoberman, Executive Chairman and Co-Founder, Founders Factory
Videesha Böckle, Founding Partner, signals Venture Capital
Bindi Karia, Innovation Expert + Advisor and Investor, Bindi Ventures
Christian Hernandez Gallardo, Co-Founder and Venture Partner, White Star Capital
Xbox One does away with discs in new $249 All-Digital Edition

4:55pm, 16th April, 2019
Discs! What are they good for? Well, they’re nice if you don’t want to be tied to an online-only ecosystem. But if you don’t mind that, Microsoft’s latest Xbox One S, the “All-Digital Edition,” might be for you. With no disc slot to speak of, the console is limited to downloading games to its drive — which is how we’ve been doing it on PC for quite some time.

Announced during today’s “Inside Xbox” video presentation, the Xbox One S All-Digital Edition — honestly, why not just give it a different letter? — is identical to the existing One S except for, of course, not having a disc slot in the front.

The Xbox One X (left) and S (center) are missing this valuable feature exclusive to the All-Digital Edition (right)

The impact of the news was lessened somewhat by Sony’s reveal of its next-generation console plans, which disclosed little — but enough to get gamers talking on a day Microsoft would have preferred was about its game ecosystem. But to return to the disc-free Xbox.

“We’re not looking to push customers toward digital,” explained Microsoft’s Jeff Gattis in a press release. “It’s about meeting the needs of customers that are digital natives that prefer digital-based media. Given this is the first product of its kind, it will teach us things we don’t already know about customer preferences around digital and will allow us to refine those experiences in the future. We see this as a step forward in extending our offerings beyond the core console gamer.”

The CPU and GPU are the same, the RAM is the same, everything is the same. Even, unfortunately, the hard drive: a single lonely terabyte (imagine saying that a few years ago) that could fill up fast if every game has to be downloaded in full rather than loaded from disc. It’s also the exact same shape and size as the S, which seems like a missed opportunity — they couldn’t make it a little smaller or thinner after taking out the whole Blu-ray assembly? Well, at least the original is a nice-looking little box to begin with. (“Changes that affect the form of a console can be complex and costly,” said Gattis.)

At $249 it’s $50 cheaper than the disc-using edition, and it comes with copies of Sea of Thieves, Minecraft and Forza Horizon 3. That’s a pretty decent value, I’d say. If you’re looking to break into the Xbox ecosystem and don’t want to clutter your place with a bunch of discs and cases, this is a nice option. Sea of Thieves had kind of a weak start but has grown quite a bit, FH3 is supposed to be solid and Minecraft is, of course, Minecraft.

You may also want to spring for the new Xbox Game Pass Ultimate service, which combines Xbox Live Gold and Xbox Game Pass — meaning you get the usual online benefits as well as access to the growing Game Pass library. There’s enough there now that, with the games you get in the box, you shouldn’t have to buy much of anything until whatever Microsoft announces at E3 comes out. (There’s even a special offer for three months of Game Pass for a buck to get you started.)

You can pre-order the All-Digital Edition (which really should have been called the Xbox One D) now, and it should ship and be available at retailers starting May 7.
Daily Crunch: Hands-on with the Samsung Galaxy Fold

2:46pm, 16th April, 2019
The Daily Crunch is TechCrunch's roundup of our biggest and most important stories. If you'd like to get this delivered to your inbox every day at around 9am Pacific, you can subscribe here.
1. After eight years of teasing a folding device, Samsung finally pulled the trigger with an announcement at its developer's conference late last year. But the device itself remained mysterious. Earlier this week, Brian Heater finally held the Galaxy Fold in his hands, and he was pretty impressed.
2. Some viewers following live coverage of the Notre-Dame Cathedral fire broadcast on YouTube were met with a strangely out-of-place info box offering facts about the September 11 attacks. Ironically, the feature is supposed to fact-check topics that generate misinformation on the platform.
3. Disney now has a 67 percent ownership stake in Hulu — which it gained, in part, through its $71 billion acquisition of 21st Century Fox. Comcast has a 33 percent stake.
4. Our security reporter Zack Whittaker filed a Freedom of Information request with U.S. Citizenship and Immigration Services to obtain all of the files the government had collected on him in order to process his green card application. Seven months later, disappointment.
5. Video app TikTok has become a global success, but it stumbled hard in one of the world's biggest mobile markets, India, over illicit content.
6. Canalys forecasts the smart speaker installed base will grow by 82.4 percent, from 114 million units in 2018 to 207.9 million in 2019.
7. Salesforce announced that it will integrate Salesforce.org — which had been a reseller of Salesforce software and services to the nonprofit sector — into Salesforce itself as part of a new nonprofit and education vertical.
Apple could build macOS feature to use your iPad as extra Mac display

12:36pm, 16th April, 2019
According to a report from 9to5Mac, Apple is working on a feature that would let you pair your iPad with your Mac and turn the iPad into a secondary Mac display. The feature, codenamed Sidecar, could ship with macOS 10.15 this fall. If you've been using one of the existing third-party hardware or software solutions for this, you're already quite familiar with the setup: they let you turn your iPad into an external display, so you can extend your Mac desktop, move windows to your iPad and use it like any other secondary screen. And it sounds like Apple wants to turn those setups into a native feature. It could boost iPad sales for MacBook users, and MacBook sales for iPad users. Apple wants to simplify that feature as much as possible. According to 9to5Mac, you would access it from the standard green "maximize" button in the corner of every window. You could hover over that button and send the window to an iPad. By default, apps will be maximized on the iPad and appear as full-screen windows. Maybe you'll be able to send multiple windows and split your display between multiple macOS apps, but that's still unclear. Graphic designers are going to love that feature, as you'll be able to use the Apple Pencil. For instance, you could imagine sending the Photoshop window to your iPad and using your iPad as a Wacom tablet. Sidecar will also be compatible with standard external displays. It should make window management easier, as you'll be able to send windows to another display in just a click. Finally, 9to5Mac says that Apple is also working on Windows-like resizing shortcuts — you could drag a window to the side of the screen to resize it to half of the screen, for instance.
Talk all things robotics and AI with TechCrunch writers

2:58pm, 15th April, 2019
This Thursday, we'll be hosting our third annual robotics and AI event in Berkeley. The day is packed start-to-finish with intimate discussions on the state of robotics and deep learning with key founders, investors, researchers and technologists. The event will dig into recent developments in robotics and AI, which startups and companies are driving the market's growth and how the evolution of these technologies may ultimately play out. In preparation for our event, TechCrunch's Brian Heater spent time over the last several months visiting some of the top robotics companies in the country. Brian will be on the ground at the event, alongside Lucas, who will also be on the scene. On Friday at 11:00 am PT, Brian and Lucas will be sharing with Extra Crunch members (on a conference call) what they saw and what excited them most. Tune in to find out what you might have missed and to ask Brian and Lucas anything else about robotics, AI or hardware. Want to attend the event in Berkeley this week? Tickets are still available. To listen to this and all future conference calls, become a member of Extra Crunch.
iOS 13 could feature dark mode and interface updates

10:38am, 15th April, 2019
According to a new report, the next major version of iOS for the iPhone and iPad will bring many new features, such as a universal dark mode, new gestures, visual changes for the volume popup and more. Dark mode should work more or less like dark mode on macOS Mojave. You'll be able to turn on a system-wide option in Settings, and apps that support it will automatically switch to dark mode the next time you launch them. Let's hope that third-party developers will support the feature; otherwise, it would be a bit useless if Facebook, Instagram, Gmail or Amazon still featured blindingly white backgrounds. The other big change is that you'll be able to open multiple windows of the same app on the iPad. You can already open two Safari tabs side by side, but it sounds like Apple plans to expand that feature beyond Safari with a card metaphor. Each window will be represented as a card that you can move, stack or dismiss. Other iOS 13 features sound like minor improvements that should make iOS less frustrating. And it starts with new gestures. Instead of shaking your device to undo an action, you'll be able to swipe with three fingers on the virtual keyboard to undo and redo a text insertion. Similarly, Apple could be working on a new way to select multiple items in a table view or grid view: you could just drag a rectangle around multiple items to select them. Once again, Apple is reusing a classic macOS feature on iOS. Some apps will receive updates, such as Mail and Reminders. The default email client will sort your emails into multiple categories (marketing, travel, etc.), just like Gmail. Finally, that annoying volume popup could be on the way out. Apple could replace it with a more subtle volume indicator. Overall, the most exciting change is probably the ability to launch multiple windows of the same app. It'll be interesting to see how Apple plans to implement that feature and what you'll be able to do with it. Moving away from the traditional "one app = one document" metaphor could open up a lot of different workflows.
Spy on your smart home with this open source research tool

6:41am, 13th April, 2019
Researchers at Princeton University have built a web app that lets you (and them) spy on your smart home devices to see what they're up to. The open source tool, called IoT Inspector, is available for download now. (Currently it's macOS only, with a wait list for Windows and Linux.) In a blog post about the effort, the researchers write that their aim is to offer a simple tool for consumers to analyze the network traffic of their Internet-connected gizmos. The basic idea is to help people see whether devices such as smart speakers or Wi-Fi enabled robot vacuum cleaners are sharing their data with third parties. (Or indeed how much snitching their gadgets are doing.) Testing the IoT Inspector tool in their lab, the researchers say they found a Chromecast device constantly contacting Google's servers even when not in active use. A Geeni smart bulb was also found to be constantly communicating with the cloud — sending and receiving traffic via a URL (tuyaus.com) that's operated by a China-based company with a platform which controls IoT devices. There are other ways to track devices like this — such as setting up a wireless hotspot to sniff IoT traffic using a packet analyzer like Wireshark — but the level of technical expertise required makes them difficult for plenty of consumers. The researchers say their web app doesn't require any special hardware or complicated setup, so it sounds easier than trying to go packet sniffing on your devices yourself. (One outlet that got an early look at the tool describes it as "incredibly easy to install and use.") One wrinkle: the web app doesn't work with Safari; it requires Firefox or Google Chrome (or another Chromium-based browser). The main caveat is that the team at Princeton does want to use the gathered data to feed IoT research — so users of the tool will be contributing to efforts to study smart home devices. The title of their research project is Identifying Privacy, Security, and Performance Risks of Consumer IoT Devices. The listed principal investigators are professor Nick Feamster and PhD student Danny Yuxing Huang at the university's Computer Science department. The Princeton team says it intends to study the privacy, security and network performance risks of IoT devices, but the researchers also note they may share the full dataset with other non-Princeton researchers after a standard research ethics approval process. So users of IoT Inspector will be participating in at least one research project. (Though the tool also lets you delete any collected data — per device or per account.) "With IoT Inspector, we are the first in the research community to produce an open-source, anonymized dataset of actual IoT network traffic, where the identity of each device is labelled," the researchers write. "We hope to invite any academic researchers to collaborate with us — e.g., to analyze the data or to improve the data collection — and advance our knowledge on IoT security, privacy, and other related fields (e.g., network performance)." They have produced an extensive FAQ, which anyone thinking about running the tool should definitely read before getting involved with a piece of software that's explicitly designed to spy on your network traffic. (tl;dr: they're using ARP spoofing to intercept traffic data — a technique they warn may slow your network, in addition to the risk of their software being buggy.) The dataset being harvested by the traffic analyzer tool is anonymized, and the researchers specify they're not gathering any public-facing IP addresses or locations.
But there are still some privacy risks — such as if you have smart home devices you've named using your real name. So, again, do read the FAQ carefully if you want to participate. For each IoT device on a network the tool collects multiple data points and sends them back to servers at Princeton University — including DNS requests and responses; destination IP addresses and ports; hashed MAC addresses; aggregated traffic statistics; TLS client handshakes; and device manufacturers. The tool has been designed not to track computers, tablets and smartphones by default, given the study's focus on smart home gizmos. Users can also manually exclude individual smart devices from being tracked, either by powering them down during setup or by specifying their MAC address. Up to 50 smart devices can be tracked on the network where IoT Inspector is running; anyone with more than 50 devices is asked to contact the researchers to request an increase to that limit. The project team has also produced a video showing how to install the app on a Mac.
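To make that data-collection description concrete, here is a minimal, hypothetical Python sketch (not IoT Inspector's actual code) of the kind of per-device record described above: it logs DNS lookups and keeps only a hashed MAC address. Note that this passive capture only sees traffic visible to the machine it runs on, whereas the real tool uses ARP spoofing so other devices' traffic passes through it.

```python
# Hypothetical sketch only: record which domains a smart device looks up,
# keeping a hashed MAC address rather than the raw identifier.
# Requires scapy (pip install scapy) and root privileges to sniff packets.
import hashlib

from scapy.all import DNSQR, Ether, IP, sniff


def hash_mac(mac: str) -> str:
    """Return a truncated SHA-256 digest of a MAC address (illustrative only)."""
    return hashlib.sha256(mac.encode()).hexdigest()[:16]


def log_dns_query(pkt) -> None:
    """Print one anonymized record per DNS query seen on the wire."""
    if pkt.haslayer(DNSQR) and pkt.haslayer(Ether) and pkt.haslayer(IP):
        print({
            "device": hash_mac(pkt[Ether].src),                 # hashed MAC
            "query": pkt[DNSQR].qname.decode(errors="ignore"),  # domain asked for
            "dns_server": pkt[IP].dst,                          # destination IP
        })


if __name__ == "__main__":
    # Capture DNS traffic on the default interface until interrupted.
    sniff(filter="udp port 53", prn=log_dns_query, store=False)
```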
Did you fly a drone over Fenway Park? The FAA would like a chat

7:53pm, 12th April, 2019
Drones are great. But they are also flying machines that can do lots of stupid and dangerous things. Like, for instance, fly over a major league baseball game packed with spectators. It happened at Fenway Park last night, and the FAA is not happy. The illegal flight took place during a Red Sox-Blue Jays game at Fenway; the drone, a conspicuously white DJI Phantom, reportedly first showed up around 9:30 PM, coming and going over the next hour. One of the many fans who caught sight of the drone, Chris O'Brien, said that "it would kind of drop fast then go back up then drop and spin. It was getting really low and close to the players. At one point it was getting really low and I was wondering are they going to pause the game and whatever, but they never did." Places where flying is regularly prohibited, like airports and major landmarks like stadiums, often have no-fly rules baked into the GPS systems of drones — and that's the case with DJI. In a statement, however, the company said that "whoever flew this drone over the stadium apparently overrode our geofencing system and deliberately violated the FAA temporary flight restriction in place over the game." The FAA said that it (and Boston PD) is investigating, both in statements to local news and in a tweet explaining why the flight was illegal:
FAA Statement: The FAA is investigating a report that a drone flew over Fenway Park during the baseball game last night. Flying drones in/around stadiums is prohibited starting 1hr before & ending 1hr after the scheduled game & prohibited within a radius of 3 nm of the stadium. — The FAA (@FAANews)
That's three nautical miles, which is quite a distance, covering much of central Boston. You don't really take chances when there are tens of thousands of people all gathered in one spot on a regular basis like that; drones open up some pretty ugly security scenarios. Of course, this wasn't a mile and a half from Fenway, which might have earned a slap on the wrist, but directly over the park, which, as the FAA notes, could lead to hundreds of thousands in fines and actual prison time. It's not hard to imagine why: if that drone had lost power or caught a gust (or been hit by a fly ball, at that altitude), it could have hurt or killed someone in the crowd. It's especially concerning when the FAA is working on establishing rules for expanded drone operations. You should leave a comment on the proposed rules if you feel strongly about this, by the way. Here's hoping they catch the idiot who did this. It just goes to show that you can't trust people to follow the rules, even when they're coded into a craft's OS. It's things like this that make mandatory registration of drones sound like a pretty good idea. (Red Sox won, by the way. But the season's off to a rough start.)
The Inning: Bottom 9
The Score: Tied
The Bases: Loaded
The Result: — Boston Red Sox (@RedSox)
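For a sense of how large that restricted zone is, here is a small, hypothetical Python sketch (approximate coordinates, not an official FAA or DJI tool) that checks whether a GPS fix falls inside a 3-nautical-mile circle around the park, which is the same kind of distance test a geofencing system performs.

```python
# Hypothetical geofence check: is a GPS position inside the 3-nautical-mile
# temporary flight restriction around Fenway Park? Coordinates are approximate.
import math

FENWAY = (42.3467, -71.0972)   # assumed lat/lon of Fenway Park
NO_FLY_RADIUS_M = 3 * 1852     # 3 nautical miles, in meters


def haversine_m(a, b):
    """Great-circle distance in meters between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6_371_000 * math.asin(math.sqrt(h))


def inside_tfr(position):
    """True if the position is within the restricted radius of the park."""
    return haversine_m(position, FENWAY) <= NO_FLY_RADIUS_M


# A point roughly a mile and a half east of the park is still well inside.
print(inside_tfr((42.3467, -71.0680)))   # True
```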
This little translator gadget could be a traveling reporter’s best friend

3:34pm, 12th April, 2019
If you're lucky enough to get to travel abroad, you know it's getting easier and easier to use our phones and other gadgets to translate for us. So why not do so in a way that makes sense to you? This little gadget, currently seeking crowdfunding, looks right up my alley, offering quick transcription and recording — plus music playback, like an iPod Shuffle with superpowers. The ONE Mini is really not that complex of a device — a couple of microphones and a wireless board in tasteful packaging — but that combination allows for a lot of useful stuff to happen, both offline and with its companion app. You activate the device and it starts recording, both translating and transcribing the audio via a cloud service as it goes (or later, if you choose). That right there is already super useful for a reporter like me — although you can always put your phone on the table during an interview, this is more discreet, and of course a short-turnaround translation is useful as well. Recordings are kept on the phone (no on-board memory, alas) and there's an option for a cloud service, but that probably won't be necessary, considering the compact size of these audio files. If you're paranoid about security, this probably isn't your jam, but for everyday stuff it should be just fine. If you want to translate a conversation with someone whose language you don't speak, you pick two of the 12 built-in languages in the app and then either pass the gadget back and forth or let it sit between you while you talk. The transcript will show on the phone, and the ONE Mini can bleat out the translation in its little robotic voice. Right now translation only works online, but I asked, and offline is in the plans for certain language pairs that have reliable two-way edge models, probably Mandarin-English and Korean-Japanese. It has a headphone jack, too, which lets it act as a wireless playback device for the recordings or for your music, or to take calls using the nice onboard mics. It's lightweight and has a little clip, so it's probably better than connecting directly to your phone in many cases. There's also a 24/7 interpreter line that charges two bucks a minute, which I probably wouldn't use. I think I would feel weird about it. But in an emergency it could be pretty helpful to have a panic button that sends you directly to a person who speaks both of the languages you've selected. I have to say, normally I wouldn't highlight a random crowdfunded gadget, but I happen to have met the creator of this one, Wells Tu, at one of our events, and trust him and his team to actually deliver. The previous product he worked on was a pair of translating wireless earbuds that worked surprisingly well, so this isn't their first time shipping a product in this category — that makes a lot of difference for a hardware startup. You can see it in action in the campaign video. He pointed out in an email to me that obviously wireless headphones are hot right now, but the translation functions aren't good and battery life is short. This adds a lot of utility in a small package. Right now you can score one at an early-bird price, which seems reasonable to me. They've already passed their goal and are planning on shipping in June, so it shouldn't be a long wait.
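As a rough illustration of the conversation mode described above, here is a hypothetical Python sketch of the back-and-forth flow. The ONE Mini's actual app and cloud APIs aren't public, so transcribe() and translate() below are stand-ins for whatever service the device calls.

```python
# Hypothetical sketch of the two-party conversation mode described above.
# transcribe() and translate() are placeholders, not the ONE Mini's real API.

def transcribe(audio: bytes, language: str) -> str:
    """Stand-in for a cloud speech-to-text call."""
    raise NotImplementedError("replace with the actual speech-to-text service")

def translate(text: str, source: str, target: str) -> str:
    """Stand-in for a cloud translation call."""
    raise NotImplementedError("replace with the actual translation service")

def conversation_turn(audio: bytes, speaker_lang: str, listener_lang: str) -> dict:
    """One pass of the back-and-forth mode: transcribe the clip, then translate it."""
    transcript = transcribe(audio, speaker_lang)
    translation = translate(transcript, speaker_lang, listener_lang)
    return {"transcript": transcript, "translation": translation}

# Example: a Mandarin speaker's clip, rendered for an English-speaking listener.
# result = conversation_turn(recorded_clip, "zh", "en")
```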