SpaceX kicks off its space-based internet service tomorrow with 60-satellite Starlink launch

3:47pm, 14th May, 2019
As wild as it sounds, the race is on to build a functioning space internet — and SpaceX is taking its biggest step yet with the launch of 60 (!) satellites tomorrow that will form the first wave of its Starlink constellation. It’s a hugely important and incredibly complex launch for the company — and should be well worth watching.

A Falcon 9 with the flat Starlink test satellites (they’re “production design” but not final hardware) is vertical at launchpad 40 in Cape Canaveral. It has completed its static fire test and should have a window for launch tomorrow, weather permitting.

Building satellite constellations hundreds or thousands strong is seen by several major companies and investors as the next major phase of connectivity — though it will take years and billions of dollars to do so. OneWeb, perhaps SpaceX’s biggest competitor in this area, raised a major funding round in March after launching the first satellites of a planned 650. Jeff Bezos has announced that Amazon will join the fray with the proposed 3,236-satellite Project Kuiper. Ubiquitilink is taking an approach of its own. And plenty of others are taking on smaller segments, like lower-cost or domain-specific networks.

Needless to say it’s an exciting sector, but tomorrow’s launch is a particularly interesting one because it is so consequential for SpaceX. If this doesn’t go well, it could set Starlink’s plans back long enough to give competitors an edge.

The satellites stacked inside the Falcon 9 payload fairing. “Tight fit,” pointed out CEO Elon Musk.

SpaceX hasn’t explained exactly how the 60 satellites will be distributed to their respective orbits, but founder and CEO Elon Musk did note on Twitter that there’s “no dispenser.” Of course there must be some kind of dispenser — these things aren’t going to just jump off of their own accord. They’re stuffed in there like kernels on a corncob, and each likely has its own means of maneuvering into position.

A pair of prototype satellites, Tintin-A and B, have been in orbit since early last year, and have no doubt furnished a great deal of useful information to the Starlink program. But the 60 aboard tomorrow’s launch aren’t quite final hardware. Although Musk noted that they are “production design,” COO Gwynne Shotwell has said that they are still test models.

“This next batch of satellites will really be a demonstration set for us to see the deployment scheme and start putting our network together,” she said at the Satellite 2019 conference in Washington, D.C. — they reportedly lack inter-satellite links but are otherwise functional. I’ve asked SpaceX for more information on this. It makes sense: If you’re planning to put thousands (perhaps as many as 12,000 eventually) of satellites into orbit, you’ll need to test at scale and with production hardware.

And for those worried about the possibility of overpopulation in orbit — it’s absolutely something to consider, but many of these satellites will be flying very low; at 550 kilometers up, these tiny satellites will naturally de-orbit in a handful of years. Even OneWeb’s, at 1,100 km, aren’t that high up — geosynchronous satellites are above 35,000 km. That doesn’t mean there’s no risk at all, but it does mean failed or abandoned satellites won’t stick around for long.

Just don’t expect to boot up your Starlink connection any time soon. It would take a minimum of 6 more launches like this one — a total of 420 satellites, a happy coincidence for Musk — to provide “minor” coverage. This would likely only be for testing as well, not commercial service.
Commercial service would need 12 more launches, and dozens more to bring it to the point where it can compete with terrestrial broadband. Even if it will take years to pull off, that is the plan. And by that time others will have spun up their operations as well. It’s an exciting time for space and for connectivity.

No launch time has been set as of this writing, so takeoff is just planned for Wednesday the 15th at present. As there’s no need to synchronize the launch with the movement of any particular celestial body, T-0 should be fairly flexible and SpaceX will likely just wait for the best weather and visibility. Delays are always a possibility, though, so don’t be surprised if this is pushed out to later in the week. As always you’ll be able to watch the launch live; I’ll update this post with the video link as soon as it’s available.
Cat vs best and worst robot vacuum cleaners 

2:13pm, 11th May, 2019
If you’ve flirted with the idea of buying a robot vacuum you may also have stepped back from the brink in unfolding horror at the alphabetic soup of branded discs popping into view. Consumer choice sounds like a great idea until you’ve tried to get a handle on the handle-less vacuum space. Amazon offers an A to Z of “top brands” that’s only a handful of letters short of a full alphabetic set. The horror. What awaits the unseasoned robot vacuum buyer as they resign themselves to hours of online research to try to inform — or, well, form — a purchase decision is a seeming endless permutation of robot vac reviews and round-ups. Unfortunately there are just so many brands in play that all these reviews tend to act as fuel, feeding a growing black hole of indecision that sucks away at your precious spare time, demanding you spend more and more of it reading about robots that suck (when you could, let’s be frank, be getting on with the vacuuming task yourself) — only to come up for air each time even less convinced that buying a robot dirtbag is at all a good idea. Reader, I know, because I fell into this hole. And it was hellish. So in the spirit of trying to prevent anyone else falling prey to convenience-based indecision I am — apologies in advance — adding to the pile of existing literature about robot vacuums with a short comparative account that (hopefully) helps cut through some of the chaff to the dirt-pulling chase. Here’s the bottom line: Budget robot vacuums that lack navigational smarts are simply not worth your money, or indeed your time. Yes, that’s despite the fact they are still actually expensive vacuum cleaners. Basically these models entail overpaying for a vacuum cleaner that’s so poor you’ll still have to do most of the job yourself (i.e. with a non-robotic vacuum cleaner). It’s the very worst kind of badly applied robotics. Abandon hope of getting anything worth your money at the bottom end of the heap. I know this because, alas, I tried — opting, finally and foolishly (but, in my defence, at a point of near desperation after sifting so much virtual chaff the whole enterprise seemed to have gained lottery odds of success and I frankly just wanted my spare time back), for a model sold by a well-known local retailer. It was a budget option but I assumed — or, well, hoped — the retailer had done its homework and picked a better-than-average choice. Or at least something that, y’know, could suck dust. The brand in question (Rowenta) sat alongside the better known (and a bit more expensive) iRobot on the shop shelf. Surely that must count for something? I imagined wildly. Reader, that logic is a trap. I can’t comment on the comparative performance of iRobot’s bots, which I have not personally tested, but I do not hesitate to compare a €180 (~$200) Rowenta-branded robot vacuum to a very expensive cat toy. This robot vacuum was spectacularly successful at entertaining the cat — presumably on account of its dumb disposition, bouncing stupidly off of furniture owing to a total lack of navigational smarts. (Headbutting is a pretty big clue to how stupid a robot it is, as it’s never a stand-in for intelligence even when encountered in human form.) Even more tantalizingly, from the cat’s point of view, the bot featured two white and whisker-like side brushes that protrude and spin at paw-tempting distance. In short: Pure robotic catnip. The cat did not stop attacking the bot’s whiskers the whole time it was in operation. That certainly added to the obstacles getting in its way. 
But the more existential problem was it wasn’t sucking very much at all. At the end of its first concluded ‘clean’, after it somehow managed to lurch its way back to first bump and finally hump its charging hub, I extracted the bin and had to laugh at the modest sized furball within. I’ve found larger clumps of dust gathering themselves in corners. So: Full marks for cat-based entertainment but as a vacuum cleaner it was horrible. At this point I did what every sensible customer does when confronted with an abject lemon: Returned it for a full refund. And that, reader, might have been that for me and the cat and robot vacs. Who can be bothered to waste so much money and time for what appeared laughably incremental convenience? Even with a steady supply of cat fur to contend with. But as luck would have it a Roborock representative emailed to ask if I would like to review their latest top-of-the-range model — which, at €549, does clock in at the opposite end of the price scale; ~3x the pitiful Rowenta. So of course I jumped at the chance to give the category a second spin — to see if a smarter device could impress me and not just tickle the cat’s fancy. Clearly the price difference here, at the top vs the bottom of the range, is substantial. And yet, if you bought a car that was 3x times cheaper than a Ferrari you’d still expect not just that the wheels stay on but that it can actually get you somewhere, in good time and do so without making you horribly car sick. Turns out buyers of robot vacuums need to tread far more carefully. Here comes the bookending top-line conclusion: Robot vacuums are amazing. A modern convenience marvel. But — and it’s a big one — only if you’re willing to shell out serious cash to get a device that actually does the job intended. Roborock S6: It’s a beast at gobbling your furry friend’s dander Comparing the Roborock S6 and the Rowenta Smart Force Essential Aqua RR6971WH (to give it its full and equally terrible name) is like comparing a high-end electric car with a wind-up kid’s toy. Where the latter product was so penny-pinching the company hadn’t even paid to include in the box a user manual that contained actual words — opting, we must assume, to save on translation costs by producing a comic packed with inscrutable graphics and bizarro don’t do diagrams which only served to cement the fast-cooling buyer’s conviction they’d been sold a total lemon — the Roborock’s box contains a well written paper manual that contains words and clearly labeled diagrams. What a luxury! At the same time there’s not really that much you need to grok to get your head around operating the Roborock. After a first pass to familiarize yourself with its various functions it’s delightfully easy to use. It will even produce periodic vocal updates — such as telling you it’s done cleaning and is going back to base. (Presumably in case you start to worry it’s gone astray under the bed. Or that quiet industry is a front for brewing robotic rebellion against indentured human servitude.) One button starts a full clean — and this does mean full thanks to on-board laser navigation that allows the bot to map the rooms in real-time. This means you get methodical passes, minimal headbutting and only occasional spots missed. (Another button will do a spot clean if the S6 does miss something or there’s a fresh spill that needs tidying — you just lift the bot to where you want it and hit the appropriate spot.) 
There is an app too, if you want to access extra features like being able to tell it to go clean a specific room, schedule cleans or set no-go zones. But, equally delightfully, there’s no absolute need to hook the bot to your wi-fi just to get it to do its primary job. All core features work without the faff of having to connect it to the Internet — nor indeed the worry of who might get access to your room-mapping data. From a privacy point of view this wi-fi-less app-free operation is a major plus. In a small apartment with hard flooring the only necessary prep is a quick check to clear stuff like charging cables and stray socks off the floor. You can of course park dining chairs on the table to offer the bot a cleaner sweep. Though I found the navigation pretty adept at circling chair legs. Sadly the unit is a little too tall to make it under the sofa. The S6 includes an integrated mopping function, which works incredibly well on lino-style hard flooring (but won’t be any use if you only have carpets). To mop you fill the water tank attachment; velcro-fix a dampened mop cloth to the bottom; and slide-clip the whole unit under the bot’s rear. Then you hit the go button and it’ll vacuum and mop in the same pass. In my small apartment the S6 had no trouble doing a full floor clean in under an hour, without needing to return to base to recharge in the middle. (Roborock says the S6 will drive for up to three hours on a single charge.) It also did not seem to get confused by relatively dark flooring in my apartment — which some reviews had suggested can cause headaches for robot vacuums by confusing their cliff sensors. After that first clean I popped the lid to check on the contents of the S6’s transparent lint bin — finding an impressive quantity of dusty fuzz neatly wadded therein. This was really just robot vacuum porn, though; the gleaming floors spoke for themselves on the quality of the clean. The level of dust gobbled by the S6 vs the Rowenta underlines the quality difference between the bottom and top end of the robot vacuum category. So where the latter’s plastic carapace immediately became a magnet for all the room dust it had kicked up but spectacularly failed to suck, the S6’s gleaming white shell has stayed remarkably lint-free, acquiring only a minimal smattering of cat hairs over several days of operation — while the floors it’s worked have been left visibly dust- and fur-free. (At least until the cat got to work dirtying them again.) Higher suction power, better brushes and a higher quality integrated filter appear to make all the difference. The S6 also does a much better cleaning job a lot more quietly. Roborock claims it’s 50% quieter than the prior model (the S5) and touts it as its quietest robot vacuum yet. It’s not super silent but is quiet enough when cleaning hard floors not to cause a major disturbance if you’re working or watching something in the same room. Though the novelty can certainly be distracting. Even the look of the S6 exudes robotic smarts — with its raised laser-housing bump resembling a glowing orange cylonic eye-slot. Although I was surprised, at first glance, by the single, rather feeble looking side brush vs the firm pair the Rowenta had fixed to its undercarriage. But again the S6’s tool is smartly applied — stepping up and down speed depending on what the bot’s tackling. I found it could miss the odd bit of lint or debris such as cat litter but when it did these specs stood out as the exception on an otherwise clean floor. 
It’s also true that the cat did stick its paw in again to try attacking the S6’s single spinning brush. But these attacks were fewer and a lot less fervent than vs the Rowenta, as if the bot’s more deliberate navigation commanded greater respect and/or a more considered ambush. So it appears that even to a feline eye the premium S6 looks a lot less like a dumb toy. Cat plots another ambush while the S6 works the floor On a practical front, the S6’s lint bin has a capacity of 480ml. Roborock suggests cleaning it out weekly (assuming you’re using the bot every week), as well as washing the integrated dust filter (it supplies a spare in the box so you can switch one out to clean it and have enough time for it to fully dry before rotating it back into use). If you use the mopping function the supplied reusable mop cloths do need washing afterwards too (Roborock also includes a few disposable alternatives in the box but that seems a pretty wasteful option when it’s easy enough to stick a reusable cloth in with a load of laundry or give it a quick wash yourself). So if you’re chasing a fully automated, robot-powered, end-to-cleaning-chores dream be warned there’s still a little human elbow grease required to keep everything running smoothly. Still, there’s no doubt a top-of-the-range robot vacuum like the S6 will save you time cleaning. If you can justify the not inconsiderable cost involved in buying this extra time by shelling out for a premium robot vacuum that’s smart enough to clean effectively all that’s left to figure out is how to spend your time windfall wisely — resisting the temptation to just put your feet up and watch the clever little robot at work.
Alexa, does the Echo Dot Kids protect children’s privacy?

8:06am, 9th May, 2019
A coalition of child protection and privacy groups has filed a complaint with the Federal Trade Commission (FTC) urging it to investigate a kid-focused edition of Amazon’s Echo smart speaker. The complaint against the Amazon Echo Dot Kids, which has been lodged with the FTC by groups including the Campaign for a Commercial-Free Childhood, the Center for Digital Democracy and the Consumer Federation of America, argues that the ecommerce giant is violating the Children’s Online Privacy Protection Act (Coppa) — including by failing to obtain proper consents for the use of kids’ data.

As with its other smart speaker Echo devices the Echo Dot Kids continually listens for a wake word and then responds to voice commands by recording and processing users’ speech. The difference with this Echo is it’s intended for children to use — which makes it subject to US privacy regulation intended to protect kids from commercial exploitation online.

The complaint, which the groups have published in full, argues that Amazon fails to provide adequate information to parents about what personal data will be collected from their children when they use the Echo Dot Kids; how their information will be used; and which third parties it will be shared with — meaning parents do not have enough information to make an informed decision about whether to give consent for their child’s data to be processed.

They also accuse Amazon of providing at best “unclear and confusing” information per its obligation under Coppa to also provide notice to parents to obtain consent for children’s information to be collected by third parties via the online service — such as those providing Alexa “skills” (aka apps the AI can interact with to expand its utility). A number of other concerns are also being raised about Amazon’s device with the FTC.

Amazon released the Echo Dot Kids — and, as we noted at the time, it’s essentially a brightly bumpered iteration of the company’s standard Echo Dot hardware. There are differences in the software, though. In parallel Amazon updated its Alexa smart assistant — adding parental controls, aka its FreeTime software, to the child-focused smart speaker.

Amazon said the free version of FreeTime that comes bundled with the Echo Dot Kids provides parents with controls to manage their kids’ use of the product, including device time limits; parental controls over skills and services; and the ability to view kids’ activity via a parental dashboard in the app. The software also removes the ability for Alexa to be used to make phone calls outside the home (while keeping an intercom functionality). A paid premium tier of FreeTime (called FreeTime Unlimited) also bundles additional kid-friendly content, including Audible books, ad-free radio stations from iHeartRadio Family, and premium skills and stories from the likes of Disney and National Geographic.

At the time it announced the Echo Dot Kids, Amazon said it had tweaked its voice assistant to support kid-focused interactions — saying it had trained the AI to understand children’s questions and speech patterns, and incorporated new answers targeted specifically at kids (such as jokes). But while the company was ploughing resources into adding a parental control layer to Echo and making Alexa’s speech recognition kid-friendly, the Coppa complaint argues it failed to pay enough attention to the data protection and privacy obligations that apply to products targeted at children — as the Echo Dot Kids clearly is.
Or, to put it another way, Amazon offers parents some controls over how their children can interact with the product — but not enough controls over how Amazon (and others) can interact with their children’s data via the same always-on microphone.

More specifically, the group argues that Amazon is failing to meet its obligation as the operator of a child-directed service to provide notice and obtain consent for third parties operating on the Alexa platform to use children’s data — noting that its Children’s Privacy Disclosure policy states it does not apply to third party services and skills. Instead the complaint says Amazon tells parents they should review the skill’s policies concerning data collection and use. “Our investigation found that only about 15% of kid skills provide a link to a privacy policy. Thus, Amazon’s notice to parents regarding data collection by third parties appears designed to discourage parental engagement and avoid Amazon’s responsibilities under Coppa,” the group writes in a summary of their complaint.

They are also objecting to how Amazon is obtaining parental consent — arguing its system for doing so is inadequate because it’s merely asking that a credit or debit card number be inputted. “It does not verify that the person “consenting” is the child’s parent as required by Coppa,” they argue. “Nor does Amazon verify that the person consenting is even an adult because it allows the use of debit gift cards and does not require a financial transaction for verification.”

Another objection is that Amazon is retaining audio recordings of children’s voices far longer than necessary — keeping them indefinitely unless a parent actively goes in and deletes the recordings, despite Coppa requiring that children’s data be held for no longer than is reasonably necessary. They found that additional data (such as transcripts of audio recordings) was also still retained even after audio recordings had been deleted.

A parent must contact Amazon customer service to explicitly request deletion of their child’s entire profile to remove that data residue — meaning that to delete all recorded kids’ data a parent has to nix their access to parental controls and their kids’ access to content provided via FreeTime — so the complaint argues that Amazon’s process for parents to delete children’s information is “unduly burdensome” too. Their investigation also found the company’s process for letting parents review children’s information to be similarly arduous, with no ability for parents to search the collected data — meaning they have to listen to or read every recording of their child to understand what has been stored.

They further highlight that the audio recordings the Echo Dot Kids captures can of course include sensitive personal details — such as if a child uses Alexa’s ‘remember’ feature to ask the AI to remember personal data such as their address and contact details or personal health information like a food allergy. The group’s complaint also flags the risk of other children having their data collected and processed by Amazon without their parents’ consent — such as when a child has a friend or family member visiting on a playdate and they end up playing with the Echo together.

Responding to the complaint, Amazon has denied it is in breach of Coppa. In a statement a company spokesperson said: “FreeTime on Alexa and Echo Dot Kids Edition are compliant with the Children’s Online Privacy Protection Act (COPPA).
Customers can find more information on Alexa and overall privacy practices here.” An Amazon spokesperson also told us it only allows kid skills to collect personal information from children outside of FreeTime Unlimited (i.e. the paid tier) — and then only if the skill has a privacy policy and the developer separately obtains verified consent from the parent, adding that most kid skills do not have a privacy policy because they do not collect any personal information. At the time of writing the FTC had not responded to a request for comment on the complaint.

Over in Europe, there has been growing concern over the use of children’s data by online services. A report by England’s children’s commissioner late last year warned kids are being “datafied”, and suggested profiling at such an early age could lead to a data-disadvantaged generation.

Responding to rising concerns the UK privacy regulator launched a consultation last month on a draft code for age-appropriate design, asking for feedback on 16 proposed standards online services must meet to protect children’s privacy — including requiring that product makers put the best interests of the child at the fore, deliver transparent T&Cs, minimize data use and set high privacy defaults. The UK government has also recently published a Whitepaper setting out a plan to regulate online harms, which has a heavy focus on child safety.
Drone sighting at Germany’s busiest airport grounds flights for about an hour

3:46am, 9th May, 2019
A drone sighting caused all flights to be suspended at Frankfurt Airport for around an hour this morning. The airport is Germany’s busiest by passenger numbers, serving almost 14.8 million passengers in the first three months of this year.

In a tweet sent after flights had resumed the airport reported that operations were suspended at 07:27, before the suspension was lifted at 08:15, with flights resuming at 08:18. It added that security authorities were investigating the incident.

Drone sighting at the airport. Flight operations suspended between 07:27 and 08:15. Investigation and search measures by the security authorities were carried out. Flight operations resumed as of 08:18. Our press release will follow. — Bundespolizei Flughafen Frankfurt am Main (@bpol_air_fra)

A report in local media suggests more than 100 takeoffs and landings were cancelled as a result of the disruption caused by the drone sighting.

All flights to Frankfurt (FRA) are currently holding or diverting due to drone activity near the airport — International Flight Network (@FlightIntl)

It’s the second such incident at the airport after a drone sighting at the end of March also caused flights to be suspended for around half an hour. Drone sightings near airports have been on the increase for years as drones have landed in the market at increasingly affordable prices, as have reports of drone near misses with aircraft.

The Frankfurt suspension follows far more major disruption caused by repeat drone sightings at the UK’s second largest airport, Gatwick Airport, late last year — which caused a series of flight shutdowns and travel misery for hundreds of thousands of people right before the holiday period. The UK government came in for trenchant criticism immediately afterwards, with experts saying it had failed to listen to warnings about the risks posed by drone misuse. A planned drone bill has also been long delayed, meaning new legislation to comprehensively regulate drones has slipped.

In response to the Gatwick debacle the UK government quickly pushed through an expanded no-fly zone around airports after criticism by aviation experts — beefing up the existing 1km exclusion zone to 5km. It also said it would bring forward further measures to tackle drone misuse.

In Germany an amendment to air traffic regulations entered into force in 2017 that prohibits drones being flown within 1.5km of an airport. Drones are also banned from being flown in controlled airspace. However, with local press reporting a rise in such incidents — the country’s Air Traffic Control registered 125 drone sightings last year, 31 of which were around Frankfurt — the 1.5km limit looks similarly inadequate.
Non-invasive glucose monitor EasyGlucose takes home Microsoft’s Imagine Cup and $100K

12:36pm, 8th May, 2019
Microsoft’s yearly Imagine Cup student startup competition crowned its latest winner today: EasyGlucose, a non-invasive, smartphone-based method for diabetics to test their blood glucose. It and the two other similarly beneficial finalists presented today at Microsoft’s Build developers conference.

The Imagine Cup brings together winners of many local student competitions around the world with a focus on social good and, of course, Microsoft services like Azure. Last year’s winner was a smart prosthetic forearm that uses a camera in the palm to identify the object it is meant to grasp. (They were on hand today as well, with an improved prototype.) The three finalists hailed from the U.K., India, and the U.S.; EasyGlucose was a one-person team from my alma mater UCLA.

EasyGlucose takes advantage of machine learning’s knack for spotting the signal in noisy data, in this case the tiny details of the eye’s iris. It turns out, as creator Brian Chiang explained in his presentation, that the iris’s “ridges, crypts, and furrows” hide tiny hints as to their owner’s blood glucose levels.

EasyGlucose presents at the Imagine Cup finals.

These features aren’t the kind of thing you can see with the naked eye (or rather, on the naked eye), but by clipping a macro lens onto a smartphone camera Chiang was able to get a clear enough image that his computer vision algorithms were able to analyze them. The resulting blood glucose measurement is significantly better than any non-invasive measure and more than good enough to serve in place of the most common method used by diabetics: stabbing themselves with a needle every couple hours. Currently EasyGlucose gets within 7 percent of the pinprick method, well above what’s needed for “clinical accuracy,” and Chiang is working on closing that gap.

No doubt this innovation will be welcomed warmly by the community, as well as the low cost: $10 for the lens adapter, and $20 per month for continued support via the app. It’s not a home run, or not just yet: Naturally, a technology like this can’t go straight from the lab (or in this case the dorm) to global deployment. It needs FDA approval first, though it likely won’t have as protracted a review period as, say, a new cancer treatment or surgical device. In the meantime, EasyGlucose has a patent pending, so no one can eat its lunch while it navigates the red tape.

As the winner, Chiang gets $100,000, plus $50,000 in Azure credit, plus the coveted one-on-one mentoring session with Microsoft CEO Satya Nadella.

The other two Imagine Cup finalists also used computer vision (among other things) in service of social good. Caeli is taking on the issue of air pollution by producing custom high-performance air filter masks intended for people with chronic respiratory conditions who have to live in polluted areas. This is a serious problem in many places that cheap or off-the-shelf filters can’t really solve. It uses your phone’s front-facing camera to scan your face and pick the mask shape that makes the best seal against your face. What’s the point of a high-tech filter if the unwanted particles just creep in the sides?

Part of the mask is a custom-designed compact nebulizer for anyone who needs medication delivered in mist form, for example someone with asthma. The medicine is delivered automatically according to the dosage and schedule set in the app — which also tracks pollution levels in the area so the user can avoid hot zones.
Finderr is an interesting solution to the problem of visually impaired people being unable to find items they’ve left around their home. By using a custom camera and computer vision algorithm, the service watches the home and tracks the placement of everyday items: keys, bags, groceries, and so on. Just don’t lose your phone, since you’ll need that to find the other stuff.

You call up the app and tell it (by speaking) what you’re looking for, then using the phone’s camera it determines your location relative to the item you’re looking for, giving you audio feedback that guides you to it in a sort of “getting warmer” style, and a big visual indicator for those who can see it.

After their presentations, I asked the creators a few questions about upcoming challenges, since as is usual in the Imagine Cup, these companies are extremely early stage.

Right now EasyGlucose is working well but Chiang emphasized that the model still needs lots more data and testing across multiple demographics. It’s trained on 15,000 eye images but many more will be necessary to get the kind of data they’ll need to present to the FDA.

Finderr recognizes all the items in the widely used ImageNet database, but the team’s Ferdinand Loesch pointed out that others can be added very easily with 100 images to train with. As for the upfront cost, the U.K. offers a 500-pound grant to visually-impaired people for this sort of thing, and they engineered the 360-degree ceiling-mounted camera to minimize the number needed to cover the home.

Caeli noted that the nebulizer, which really is a medical device in its own right, is capable of being sold and promoted on its own, perhaps licensed to medical device manufacturers. There are other smart masks coming out, but he had a pretty low opinion of them (not strange in a competitor but there isn’t some big market leader they need to dethrone). He also pointed out that in the target market of India (from which they plan to expand later) it isn’t as difficult to get insurance to cover this kind of thing.

While these are early-stage companies, they aren’t hobbies — though admittedly many of their founders are working on them between classes. I wouldn’t be surprised to hear more about them and others from Imagine Cup pulling in funding and hiring in the next year.
Samsung spilled SmartThings app source code and secret keys

8:16am, 8th May, 2019
A development lab used by Samsung engineers was leaking highly sensitive source code, credentials and secret keys for several internal projects — including its SmartThings platform, a security researcher found.

The electronics giant left dozens of internal coding projects on a GitLab instance hosted on a Samsung-owned domain, Vandev Lab. The instance, used by staff to share and contribute code to various Samsung apps, services and projects, was spilling data because the projects were set to “public” and not properly protected with a password, allowing anyone to look inside each project, access it, and download the source code.

Hussein, a security researcher at Dubai-based cybersecurity firm SpiderSilk who discovered the exposed files, said one project contained credentials that allowed access to the entire AWS account that was being used, including over a hundred S3 storage buckets that contained logs and analytics data. Many of the folders, he said, contained logs and analytics data for Samsung’s SmartThings and Bixby services, but also several employees’ exposed credentials stored in plaintext, which allowed him to gain additional access from 42 public projects to 135 projects, including many private projects. Samsung told him some of the files were for testing but Hussein challenged the claim, saying source code found in the GitLab repository contained the same code as the SmartThings app published in Google Play on April 10. The app has since been updated.

“I had the private token of a user who had full access to all 135 projects on that GitLab,” he said, which could have allowed him to make code changes using a staffer’s own account. Hussein shared several screenshots and a video of his findings for TechCrunch to examine and verify.

The exposed GitLab instance also contained private certificates for Samsung’s SmartThings iOS and Android apps. Hussein also found several internal documents and slideshows among the exposed files. “The real threat lies in the possibility of someone acquiring this level of access to the application source code, and injecting it with malicious code without the company knowing,” he said. Through exposed private keys and tokens, Hussein documented a vast amount of access that if obtained by a malicious actor could have been “disastrous,” he said.

A screenshot of the exposed AWS credentials, allowing access to buckets with GitLab private tokens. (Image: supplied)

Hussein, a white-hat hacker and data breach discoverer, reported the findings to Samsung on April 10. In the days following, Samsung began revoking the AWS credentials but it’s not known if the remaining secret keys and certificates were revoked. Samsung still hasn’t closed the case on Hussein’s vulnerability report, close to a month after he first disclosed the issue.

“Recently, an individual security researcher reported a vulnerability through our security rewards program regarding one of our testing platforms,” Samsung spokesperson Zach Dugan told TechCrunch when reached prior to publication. “We quickly revoked all keys and certificates for the reported testing platform and while we have yet to find evidence that any external access occurred, we are currently investigating this further.”

Hussein said Samsung took until April 30 to revoke the GitLab private keys. Samsung also declined to answer specific questions we had and provided no evidence that the Samsung-owned development environment was for testing.
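The root cause here — repositories whose visibility was left set to “public” on an internet-reachable instance — is something any team can audit for programmatically. Below is a minimal, illustrative sketch (not anything Samsung or SpiderSilk uses) that lists publicly visible projects on a self-hosted GitLab server via GitLab’s standard REST API; the instance URL and token are placeholders.

```python
# Illustrative audit sketch: list publicly visible projects on a GitLab instance.
# Uses the standard GitLab v4 REST API; the URL and token below are placeholders.
import requests

GITLAB_URL = "https://gitlab.example.com"   # hypothetical self-hosted instance
PRIVATE_TOKEN = "YOUR_READ_ONLY_TOKEN"      # a read-only API token

def public_projects(base_url, token):
    """Yield projects whose visibility is 'public', paging through the API."""
    page = 1
    while True:
        resp = requests.get(
            f"{base_url}/api/v4/projects",
            params={"visibility": "public", "per_page": 100, "page": page},
            headers={"PRIVATE-TOKEN": token},
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json()
        if not batch:
            break
        for project in batch:
            yield project
        page += 1

if __name__ == "__main__":
    for proj in public_projects(GITLAB_URL, PRIVATE_TOKEN):
        print(f"{proj['id']}\t{proj['path_with_namespace']}")
```

Anything that shows up unexpectedly in a listing like this is exactly the kind of misconfiguration that left these projects exposed.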
Hussein is no stranger to reporting security vulnerabilities. He recently disclosed an exposed server at an anonymous social networking site popular among Silicon Valley employees — and found a leaking server belonging to scientific journal giant Elsevier. Samsung’s data leak, he said, was his biggest find to date. “I haven’t seen a company this big handle their infrastructure using weird practices like that,” he said.
Live transcription and captioning in Android are a boon to the hearing-impaired

2:57pm, 7th May, 2019
A set of new features for Android could alleviate some of the difficulties of living with hearing impairment and other conditions. Live transcription, captioning, and relay use speech recognition and synthesis to make content on your phone more accessible — in real time.

Announced today at Google’s I/O event in a surprisingly long segment on accessibility, the features all rely on improved speech-to-text and text-to-speech algorithms, some of which now run on-device rather than sending audio to a datacenter to be decoded.

The first feature to be highlighted, live transcription, was already mentioned by Google before. It’s a simple but very useful tool: open the app and the device will listen to its surroundings and simply display any speech it recognizes as text on the screen. We’ve seen this in translator apps and devices, and in the meeting transcription highlighted yesterday at Microsoft Build. One would think that such a straightforward tool is long overdue, but in fact everyday circumstances, like talking to a couple of friends at a cafe, can be remarkably difficult for natural language systems trained on perfectly recorded single-speaker audio. Improving the system to the point where it can track multiple speakers and display accurate transcripts quickly has no doubt been a challenge.

Another feature enabled by this improved speech recognition ability is live captioning, which essentially does the same thing as above, but for video. Now when you watch a YouTube video, listen to a voice message, or even take a video call, you’ll be able to see what the person in it is saying, in real time. That should prove incredibly useful not just for the millions of people who can’t hear what’s being said, but also those who don’t speak the language well and could use text support, or anyone watching a show on mute when they’re supposed to be going to sleep, or any number of other circumstances where hearing and understanding speech just isn’t the best option.

Captioning phone calls is something Google CEO Sundar Pichai said is still under development, but the “live relay” feature they demoed on stage showed how it might work. A person who is hearing-impaired or can’t speak will certainly find an ordinary phone call to be pretty worthless. But live relay turns the call immediately into text, and immediately turns text responses into speech the person on the line can hear.

Live captioning should be available on Android Q when it releases, with some device restrictions. Live transcription is available now, but a warning states that it is still in development. Live relay is yet to come, but showing it on stage in such a complete form suggests it won’t be long before it appears.
OnePlus CEO Pete Lau will discuss the future of mobile at Disrupt SF

12:47pm, 7th May, 2019
Founded in late 2013, OnePlus did the impossible, coming seemingly out of nowhere to take on some of the biggest players in mobile. The company has made a name by embracing a fawning fanbase and offering premium smartphone features at budget pricing, even as the likes of Samsung and Apple routinely crack the $1,000 barrier on their own flagships.

The company’s history is awash with clever promotions and fan service, all while exceeding expectations in markets like the U.S., where fellow Chinese smartphone makers have run afoul of U.S. regulations. Its measured approach to embracing new features has won a devoted following among Android users. Over the past year, however, the company has looked to bleeding-edge technology as a way forward. OnePlus was one of the first to embrace in-display fingerprint sensors with last year’s 6T and has promised to be among the first to offer 5G on its handsets later this year.

CEO Pete Lau formed the company with fellow Oppo employee Carl Pei. The pair have turned the company into arguably the most exciting smartphone manufacturer of the past decade. OnePlus has big plans on the horizon, too, including further expansion into the Indian market and the arrival of its first TV set in the coming year.

At Disrupt SF (which runs October 2 to October 4), Lau will discuss OnePlus’ rapid ascent and its plans for the future. Tickets are available.
Marshall continues to impress with new retro portable speakers

10:47am, 6th May, 2019
Marshall, the headphone company and not the loudspeaker company of the same name, today announced two new portable speakers. Like the company’s previous offerings, these speakers ooze a retro vibe. The two new speakers, the Stockwell II and Tufton, share that look, but stand tall, literally and figuratively, apart from the rest of Marshall’s speakers as portable models with a vertical orientation, internal batteries, wireless capabilities and a rugged casing that should survive a trip outside.

The large Tufton impresses with clear, powerful sound even when on battery. The highs carry over a solid low-end. It’s heavy. This isn’t a speaker you want to take backpacking, but, if you did, the casing has an IPX4 water-resistant rating, so it’s tough enough to handle most weather. Marshall says the battery lasts up to six hours.

The Stockwell II is much smaller. The little speaker is about the size of an iPad Mini, though as thick as a phone book. The internal battery is good for four hours and the casing is still tough, though it sports an IPX2 rating, so it’s not as durable as the Tufton. The speaker is a bit smaller and the sound is as well. The Stockwell II is a great personal speaker, but it doesn’t produce a pounding sound like the Tufton. Use the Stockwell II for a quiet campfire and the Tufton for a backwoods bonfire.

Sadly, these speakers lack Google Assistant or Amazon Alexa integration. Users either have to connect a device through a 3.5mm port or Bluetooth.

I’ve been a fan of every Marshall speaker I’ve tried. For my money, they feature a great balance of sound and classic design. Each one I’ve tried lives up to the Marshall name and these two new speakers are no different. Portability doesn’t come cheap, though. These speakers cost a bit more than their stationary counterparts. The small Stockwell II retails for $249 while the large Tufton is $399.
Life-size robo-dinosaur and ostrich backpack hint at how first birds got off the ground

8:17pm, 2nd May, 2019
Everyone knows birds descended from dinosaurs, but exactly how that happened is the subject of much study and debate. To help clear things up, these researchers went all out and just straight up built a robotic dinosaur to test their theory: that these proto-birds flapped their “wings” well before they ever flew.

Now, this isn’t some hyper-controversial position or anything. It’s pretty reasonable when you think about it: natural selection tends to emphasize existing features rather than invent them from scratch. If these critters had, say, moved from being quadrupedal to being bipedal and had some extra limbs up front, it would make sense that over a few million years those limbs would evolve into something useful. But when did it start, and how?

To investigate, Jing-Shan Zhao of Tsinghua University in Beijing looked into Caudipteryx, a ground-dwelling animal with feathered forelimbs that could be considered “proto-wings.” Based on the well-preserved fossil record of this bird-dino crossover, the researchers estimated a number of physiological metrics, such as the creature’s top speed and the rhythm with which it would run. From this they could estimate forces on other parts of the body — just as someone studying a human jogger would be able to say that such and such a joint is under this or that amount of stress.

What they found was that, in theory, these “natural frequencies” and the biophysics of the Caudipteryx’s body would cause its little baby wings to flap up and down in a way suggestive of actual flight. Of course they wouldn’t provide any lift, but this natural rhythm and movement may have been the seed which grew over generations into something greater.

To give this theory a bit of practical punch, the researchers then constructed a pair of unusual mechanical items: a pair of replica Caudipteryx wings for a juvenile ostrich to wear, and a robotic dinosaur that imitated the original’s gait. A bit fanciful, sure — but why shouldn’t science get a little crazy now and then?

In the case of the ostrich backpack, they literally just built a replica of the dino-wings and attached it to the bird, then had the bird run. Sensors on board the device verified what the researchers observed: that the wings flapped naturally as a result of the body’s motion and vibrations from the feet impacting the ground.

The robot is a life-size reconstruction based on a complete fossil of the animal, made of 3D-printed parts, to which the ostrich’s fantasy wings could also be affixed. The researchers’ theoretical model predicted that the flapping would be most pronounced as the speed of the bird approached 2.31 meters per second — and that’s just what they observed in the stationary model imitating gaits corresponding to various running speeds.

As the researchers summarize: These analyses suggest that the impetus of the evolution of powered flight in the theropod lineage that lead to Aves may have been an entirely natural phenomenon produced by bipedal motion in the presence of feathered forelimbs.

Just how legit is this? Well, I’m not a paleontologist. And an ostrich isn’t a Caudipteryx. And the robot isn’t exactly convincing to look at. We’ll let the scholarly community pass judgment on this paper and its evidence (don’t worry, it’s been peer-reviewed), but I think it’s fantastic that the researchers took this route to test their theory.
A few years ago this kind of thing would have been far more difficult to do, and although it seems a little silly when you watch it (especially in gif form), there’s a lot to be said for this kind of real-life tinkering when so much of science is occurring in computer simulations.
Blue Origin lofts NASA and student experiments in New Shepard tomorrow morning

1:59pm, 1st May, 2019
The 11th mission for Blue Origin’s New Shepard suborbital launch vehicle is slated for takeoff tomorrow morning. The craft will be carrying 38 (!) experimental payloads from NASA, students, and research organizations around the world. You’ll be able to watch the launch live tomorrow at about 6:30 AM Pacific time.

New Shepard, though a very different beast from the Falcon 9 and Heavy launch vehicles created by its rival SpaceX, is arguably a better platform for short-duration experiments that need to be exposed to launch stresses and microgravity. Launching satellites — that’s a job for Falcons and Deltas, or perhaps Blue Origin’s impending New Glenn, and they’re welcome to it. But researchers around the country are clamoring for spots on suborbital flights and Blue Origin is happy to provide them.

We are targeting the next launch of New Shepard tomorrow May 2nd at 8:30 am CDT / 13:30 UTC. The mission will take 38 microgravity research payloads to space. Watch the launch live. — Blue Origin (@blueorigin)

Tomorrow’s launch will be carrying several dozen payloads, some of which will have been waiting years for their chance to board a rocket. Here are a few examples of what will be tested during the short flight:

A medical suction device: As more people go into space, we have to be prepared for more and graver injuries. Lots of standard medical tools won’t work properly in microgravity, so it’s necessary to redesign and test them under those conditions. This one is about providing suction, as you might guess, which can be used for lung injuries, drawing blood, and other situations that call for negative air pressure.

This little guy will be doing microgravity test prints using metal.

A metal 3D printer: Simply everyone knows we can 3D print stuff in space. But just as on Earth, you can’t always make your spare parts out of thermoplastic. Down here we use metal-based 3D printers, and this experiment aims to find out if a modified design will allow for metal printing in space as well.

A compact centrifuge: It sounds like something the Enterprise would deploy in Star Trek, but it’s just a test bed for a new type of centrifuge that could help simulate other gravities, such as that of the Moon or Mars, for purposes of experiments. They do this on the ISS already but this would make it more compact and easier to automate, saving time and space aboard any craft it flies on.

The suborbital centrifuge, looking as cool as it sounds.

A cell-monitoring lab-on-a-chip: The largest ever study of space-based health and the effects of microgravity on the human body was just concluded, but there’s much, much more to know. Part of that requires monitoring cells in real time — which like most things is easier to do on the surface. This lab-on-a-chip will test out a new technique for containing individual cells or masses and tracking changes to them in a microgravity environment.

It’s all made possible through a NASA program that is specifically all about putting small experiments aboard commercial spacecraft.

The launch itself should be very similar to previous New Shepard flights, just like one commercial jet takeoff is like another. The booster fires up and ascends to just short of the Karman line at 100 kilometers, which (somewhat arbitrarily) marks the start of “space.” At that point the capsule will detach and fly upwards with its own momentum, exposing the payloads within to several minutes of microgravity; after it tops out, it will descend and deploy its parachutes, after which it will drift leisurely to the ground.
Meanwhile the rocket will have descended as well and made a soft landing on its deployable struts. The launch is scheduled for 6:30 AM Pacific time — 8:30 AM Central in Texas, at Blue Origin’s launch site. You’ll be able to watch it live.
Google’s Wear OS gets tiles

Google’s Wear OS gets tiles

11:49am, 1st May, 2019
Google announced an interesting new Wear OS feature today that makes a number of highly used functions more easily available. Google calls this feature ’tiles’ and it makes both information like the local weather forecast, headlines, your next calendar event, goals and your heart rate, and tools like the Wear OS built-in timer available with just a few swipes to the left.

In the most recent version of Wear OS, tiles also existed in some form, but the only available tile was Google Fit, which opened with a single swipe. Now, you’ll be able to swipe further and bring up these new tiles, too. There is a default order to these tiles, but you’ll be able to customize them, too. All you have to do is touch and hold a given tile and then drag it to the left or right. Over time, Google will also add more tiles to this list.

The new tiles will start rolling out to all Wear OS smartwatches over the coming months. Some features may not be available on all devices, though (if your watch doesn’t have a heart rate monitor, you obviously won’t see that tile, for example).

Overall, this looks like a smart update to the Wear OS platform, which now features four clearly delineated quadrants. Swiping down brings up settings, swiping up brings up your notifications, swiping right brings up the Google Assistant and swiping left shows tiles. Using the left swipe only for Google Fit always felt oddly limited, but with this update, that decision makes more sense.
Amazon is testing a Spanish-language Alexa experience in the US ahead of a launch this year

6:41pm, 29th April, 2019
Amazon announced today it has begun to ask customers to participate in a preview program that will help the company build a Spanish-language Alexa experience for U.S. users. The program, which is currently invite-only, will allow Amazon to incorporate into the U.S. Spanish-language experience a better understanding of things like word choice and local humor, as it has done with prior language launches in other regions.

In addition, developers have been invited to begin building Spanish-language skills, also starting today, using the Alexa Skills Kit. Amazon notes that any skills created now will be made available to the customers in the preview program for the time being. They’ll then roll out to all customers when Alexa launches in the U.S. with Spanish-language support later this year. Manufacturers who want to build “Alexa Built-in” products for Spanish-speaking customers can also now request early access to a related Alexa Voice Services (AVS) developer preview. Amazon says that Bose, Facebook and Sony are preparing to do so, while smart home device makers, including Philips, TP Link and Honeywell Home, will bring to U.S. users “Works with Alexa” devices that support Spanish.

Ahead of today, Alexa had supported Spanish-language skills, but only in Spain and Mexico — not in the U.S. Those developers can opt to make their existing skills available to U.S. customers, Amazon says. In addition to Spanish, developers have also been able to create skills in English in the U.S., U.K., Canada, Australia, and India; as well as in German, Japanese, French (in France and in Canada), and Portuguese (in Brazil).

But on the language front, Google has had a decided advantage thanks to its work with Google Voice Search and Google Translate over the years. Last summer, Google Home gained support for Spanish, in addition to the device launching in Spain and Mexico. Amazon also trails Apple in terms of support for Spanish in the U.S., as the HomePod added Spanish in the U.S., Spain and Mexico in September 2018.

Spanish is a widely spoken language in the U.S. According to a report by Instituto Cervantes, the United States has the second highest concentration of Spanish speakers in the world, following Mexico. At the time of the report, there were 53 million people who spoke Spanish in the U.S. — a figure that included 41 million native Spanish speakers, and approximately 11.6 million bilingual Spanish speakers.
Why did last night’s ‘Game of Thrones’ look so bad? Here comes the science!

4:32pm, 29th April, 2019
Last night’s episode of “Game of Thrones” was a wild ride and inarguably one of an epic show’s more epic moments — if you could see it through the dark and the blotchy video. It turns out even one of the most expensive and meticulously produced shows in history can fall prey to the scourge of low quality streaming and bad TV settings. The good news is this episode is going to look amazing on Blu-ray or potentially in future, better streams and downloads. The bad news is that millions of people already had to see it in a way its creators surely lament. You deserve to know why this was the case. I’ll be simplifying a bit here because this topic is immensely complex, but here’s what you should know. (By the way, I can’t entirely avoid spoilers, but I’ll try to stay away from anything significant in words or images.) It was clear from the opening shots in last night’s episode, “The Longest Night,” that this was going to be a dark one. The army of the dead faces off against the allied living forces in the darkness, made darker by a bespoke storm brought in by, shall we say, a Mr. N.K., to further demoralize the good guys. If you squint you can just make out the largest army ever assembled Thematically and cinematographically, setting this chaotic, sprawling battle at night is a powerful creative choice and a valid one, and I don’t question the showrunners, director, and so on for it. But technically speaking, setting this battle at night, and in fog, is just about the absolute worst case scenario for the medium this show is native to: streaming home video. Here’s why. Compression factor Video has to be compressed in order to be sent efficiently over the internet, and although we’ve made enormous strides in video compression and the bandwidth available to most homes, there are still fundamental limits. The master video that HBO put together from the actual footage, FX, and color work that goes into making a piece of modern media would be huge: hundreds of gigabytes if not terabytes. That’s because the master has to include all the information on every pixel in every frame, no exceptions. Imagine if you tried to “stream” a terabyte-sized TV episode. You’d have to be able to download upwards of 200 megabytes per second for the full 80 minutes of this one. Few people in the world have that kind of connection — it would basically never stop buffering. Even 20 megabytes per second is asking too much by a long shot. 2 is doable — slightly under the 25 megabit speed (that’s bits… divide by 8 to get bytes) we use to define broadband download speeds. So how do you turn a large file into a small one? Compression — we’ve been doing it for a long time, and video, though different from other types of data in some ways, is still just a bunch of zeroes and ones. In fact it’s especially susceptible to strong compression because of how one video frame is usually very similar to the last and the next one. There are all kinds of shortcuts you can take that reduce the file size immensely without noticeably impacting the quality of the video. These compression and decompression techniques fit into a system called a “codec.” But there are exceptions to that, and one of them has to do with how compression handles color and brightness. Basically, when the image is very dark, it can’t display color very well. The color of winter Think about it like this: There are only so many ways to describe colors in a few words. 
But there are exceptions to that, and one of them has to do with how compression handles color and brightness. Basically, when the image is very dark, the codec can’t represent color very well.

The color of winter

Think about it like this: there are only so many ways to describe colors in a few words. If you have one word you can say red, or maybe ochre or vermilion depending on your interlocutor’s vocabulary. But if you have two words you can say dark red, darker red, reddish black, and so on. The codec has a limited vocabulary as well, though its “words” are the numbers of bits it can use to describe a pixel. This lets it succinctly describe a huge array of colors with very little data by saying: this pixel has this bit value of color, this much brightness, and so on. (I didn’t originally want to get into this, but this is what people are talking about when they say bit depth, or even “highest quality pixels.”)

But this also means that there are only so many gradations of color and brightness it can show. Going from a very dark grey to a slightly lighter grey, it might be able to pick five intermediate shades. That’s perfectly fine if it’s just on the hem of a dress in the corner of the image. But what if the whole image is limited to that small selection of shades? Then you get what we saw last night.

See how Jon (I think) is made up almost entirely of only a handful of different colors (brightnesses of a similar color, really) in this shot, with big, obvious borders between them? This issue is called “banding,” and it’s hard not to notice once you see how it works. Images on video can be incredibly detailed, but places where there are subtle changes in color — often a clear sky or some other large but mild gradient — will exhibit large stripes as the codec goes from “darkest dark blue” to “darker dark blue” to “dark blue,” with no “darker darker dark blue” in between.

Check out this image. Above is a smooth gradient encoded with high color depth. Below that is the same gradient encoded with lossy JPEG encoding — different from what HBO used, obviously, but you get the idea.
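If you want to see the effect for yourself, here’s a minimal sketch in Python (a toy quantization, not HBO’s actual encoder) that crushes a smooth, very dark gradient down to a handful of shades:

```python
import numpy as np

# Toy illustration of banding: a smooth, very dark gradient forced into only a
# few representable shades, a stand-in for a codec's tiny bit budget in the shadows.
gradient = np.linspace(0.0, 0.1, 1920)   # smooth ramp across one row, all deep in the shadows
levels = 5                               # pretend only five usable shades exist down here

banded = np.round(gradient / 0.1 * (levels - 1)) / (levels - 1) * 0.1

print(np.unique(gradient).size)          # 1920 distinct values in the source
print(np.unique(banded).size)            # 5 -- the smooth ramp becomes five visible stripes
```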
Banding has plagued streaming video forever, and it’s hard to avoid even in major productions — it’s just a side effect of representing color digitally. It’s especially distracting because obviously our eyes don’t have that limitation. A high-definition screen may actually show more detail than your eyes can discern from couch distance, but color issues? Our visual systems flag them like crazy. You can minimize it, but it’s always going to be there, until the point when we have as many shades of grey as we have pixels on the screen.

So back to last night’s episode. Practically the entire show took place at night, which removes about three-quarters of the codec’s brightness-color combos right there. It also wasn’t a particularly colorful episode, a directorial or photographic choice that highlighted things like flames and blood but further limited the ability to digitally represent what was on screen. It wouldn’t be too bad if the background were black and people were lit well so they popped out, though. The last straw was the introduction of the cloud, fog, or blizzard, whatever you want to call it. This kept the brightness of the background just high enough that the codec had to represent it with one of its handful of dark greys, and the subtle movements of fog and smoke came out as blotchy messes (often called “compression artifacts”) as the compression desperately tried to pick the best shade for a group of pixels. Just brightening it doesn’t fix things, either — because the detail is already crushed into a narrow range of values, you just get a bandy image that never gets completely black, making it look washed out, as you see here:

(Anyway, the darkness is a stylistic choice. You may not agree with it, but that’s how it’s supposed to look, and messing with it beyond making the darkest details visible could be counterproductive.)

Now, it should be said that compression doesn’t have to be this bad. For one thing, the more data it is allowed to use, the more gradations it can describe, and the less severe the banding. It’s also possible (though I’m not sure where it’s actually done) to repurpose the rest of the codec’s “vocabulary” to describe a scene where its other color options are limited. That way the full bandwidth can be used to describe a nearly monochromatic scene even though, strictly speaking, it should only be using a fraction of it. But neither of these is likely an option for HBO. Increasing the bandwidth of the stream is costly, since this is being sent out to tens of millions of people — a bitrate increase big enough to change the quality would also massively swell their data costs. When you’re distributing to that many people, it also introduces the risk of hated buffering or errors in playback, which are obviously a big no-no. It’s even possible that HBO lowered the bitrate because of network limitations — “Game of Thrones” really is on the frontier of digital distribution. And using an exotic codec might not be possible because only commonly used commercial ones can really be applied at scale. Kind of like how we try to use standard parts for cars and computers.

This episode almost certainly looked fantastic in the mastering room and FX studios, where they not only had carefully calibrated monitors with which to view it but also were working with brighter footage (it would be darkened to taste by the colorist) and less or no compression. They might not even have seen the “final” version that fans “enjoyed.” We’ll see the better copy eventually, but in the meantime the choice of darkness, fog, and furious action meant the episode was going to be a muddy, glitchy mess on home TVs. And while we’re on the topic…

You mean it’s not my TV?

Well… to be honest, it might be that too. What I can tell you is that simply having a “better” TV by specs, such as 4K or a higher refresh rate or whatever, would make almost no difference in this case. Even built-in de-noising and de-banding algorithms would be hard-pressed to make sense of “The Long Night.” And one of the best new display technologies, OLED, might even make it look worse! Its “true blacks” are much darker than an LCD’s backlit blacks, so the jump to the darkest grey could be way more jarring. That said, it’s certainly possible that your TV is also set up poorly. Those of us sensitive to this kind of thing spend forever fiddling with settings and getting everything just right for exactly this kind of situation.

There are dozens of us! Now who’s “wasting his time” calibrating his TV? — John Siracusa (@siracusa)

Usually “calibration” is actually a pretty simple process of making sure your TV isn’t on the absolute worst settings, which unfortunately many are out of the box. Here’s a very basic three-point guide to “calibrating” your TV:

1. Go through the “picture” or “video” menu and turn off anything with a special name, like “TrueMotion,” “Dynamic motion,” “Cinema mode,” or anything like that. Most of these make things look worse, especially anything that “smooths” motion. Turn those off first and never, ever turn them on again. Don’t mess with brightness, gamma, color space, or anything you have to turn up or down from 50 or whatever.
2. Figure out lighting by putting on a good, well-shot movie in the situation you usually watch stuff — at night maybe, with the hall light on or whatever. While the movie is playing, click through any color presets your TV has. These are often things like “natural,” “game,” “cinema,” “calibrated,” and so on, and they take effect right away. Some may make the image look too green, or too dark, or whatever. Play around with them and use whichever one makes it look best. You can always switch later – I myself switch between a lighter and darker scheme depending on time of day and content.

3. Don’t worry about HDR, dynamic lighting, and all that stuff for now. There’s a lot of hype about these technologies and they are still in their infancy. Few will work out of the box, and the gains may or may not be worth it. The truth is a well-shot movie from the ’60s or ’70s can look just as good today as a “high dynamic range” show shot on the latest 8K digital cinema rig. Just focus on making sure the image isn’t being actively interfered with by your TV and you’ll be fine.

Unfortunately, none of these things will make “The Long Night” look any better until HBO releases a new version of it. Those ugly bands and artifacts are baked right in. But if you have to blame anyone, blame the streaming infrastructure that wasn’t prepared for a show taking risks in its presentation, risks I would characterize as bold and well executed, unlike the writing in the show lately. Oops, sorry, couldn’t help myself. The fanciest TV in the world wouldn’t have helped last night, but when the Blu-ray comes out you’ll be in for a treat: that’s the way to experience this show as it was intended. But here’s hoping the next big battle takes place in broad daylight.
Is this the vertical-folding Motorola Razr?

10:02am, 29th April, 2019
This could be the upcoming Razr revival. The images appeared online on Weibo and show a foldable design. Unlike the Galaxy Fold, though, Motorola’s implementation has the phone folding vertically — much like the original Razr. This design offers a more compelling use case than other foldables. Instead of a traditional smartphone unfolding into a tablet-like display, Motorola’s design has a smaller device unfolding into a smartphone-sized display. The result is a compact phone that turns into a normal phone. Pricing is still unclear, but the WSJ previously reported it would carry a $1,500 price tag when it’s eventually released. If it’s released. Samsung was the first to market with the Galaxy Fold. Kind of. A few journalists were given Galaxy Fold units ahead of its launch, but a handful of units failed in the first days. Samsung quickly postponed the launch and recalled all the review units. Despite this leak, Motorola has yet to confirm when this device will hit the market. Given how Samsung’s foldable debut has gone, it will likely be extra cautious before launching it to the general public.
Kiwi’s food delivery bots are rolling out to 12 more colleges

3:09pm, 25th April, 2019
If you’re a student at UC Berkeley, Kiwi’s delivery robots are probably a familiar sight by now, trundling along with a burrito inside to deliver to a dorm or apartment building. Now students at a dozen more campuses will be able to join this great, lazy future of robotic delivery as Kiwi expands to them with a clever student-run model. Speaking recently at the Berkeley campus, Kiwi’s Felipe Chavez and Sasha Iatsenia discussed the success of their burgeoning business and the way they planned to take it national. In case you’re not aware of the Kiwi model, it’s basically this: When you place an order online with a participating restaurant, you have the option of delivery via Kiwi. If you so choose, one of the company’s fleet of knee-high robots with insulated, locking storage compartments will swing by the restaurant, your order is placed inside, and the robot brings it to your front door (or as close as it can reasonably get). You can even watch the last bit live from the robot’s perspective as it rolls up to your place. The robots are what Kiwi calls “semi-autonomous.” This means that although they can navigate most sidewalks and avoid pedestrians, each has a human monitoring it and setting waypoints for it to follow, on average every five seconds. Iatsenia told me that they’d tried going fully autonomous and that it worked… most of the time. But most of the time isn’t good enough for a commercial service, so they’ve got humans in the loop. They’re working on improving autonomy, but for now this is how it is. That the robots are being controlled in some fashion by a team of people in Colombia (from where the co-founders hail) does take a considerable amount of the futurism out of this endeavor, but on reflection it’s kind of a natural evolution of the existing delivery infrastructure. After all, someone has to drive the car that brings you your food, as well. And in reality, most AI is operated or informed directly or indirectly by actual people. That those drivers are in South America operating multiple vehicles at a time is a technological advance over your average delivery vehicle — though it must be said that there is an unsavory air of offshoring labor to save money on wages. That said, few people shed tears over the wages earned by the Chinese assemblers who put together our smartphones and laptops, or the garbage pickers who separate your poorly sorted recycling. The global labor economy is a complicated one, and the company is making jobs in the place it was at least partly born. Whatever the method, Kiwi has traction: it’s done more than 50,000 deliveries and the model seems to have proven itself. Customers are happy, they get stuff delivered more than ever once they get the app, and there are fewer and fewer incidents where a robot is kicked over or, you know, worse. Notably, the founders said onstage, the community has really adopted the little vehicles, and should one overturn or be otherwise interfered with, it’s often set on its way soon after by a passerby. Iatsenia and Chavez think the model is ready to push out to other campuses, where a similar effort will have to take place — but rather than do it themselves by raising millions and hiring staff all over the country, they’re trusting the robotics-loving student groups at other universities to help out. For a small and low-cash startup like Kiwi, it would be risky to overextend by taking on a major round and using that to scale up. They started as robotics enthusiasts looking to bring something like this to their campus, so why can’t they help others do the same?
So the team looked at dozens of universities, narrowing them down by factors important to robotic delivery: layout, density, commercial corridors, demographics and so on. Ultimately they arrived at the following list: Northern Illinois University, University of Oklahoma, Purdue University, Texas A&M, Parsons, Cornell, East Tennessee State University, University of Nebraska-Lincoln, Stanford, Harvard, NYU and Rutgers. What they’re doing is reaching out to robotics clubs and student groups at those colleges to see who wants to take partial ownership of Kiwi administration out there. Maintenance and deployment would still be handled by Berkeley students, but the student clubs would go through a certification process and then do the local work, like righting a capsized bot and handling on-site issues with customers and restaurants. “We are exploring several options to work with students down the road, including rev share,” Iatsenia told me. “It depends on the campus.” So far they’ve sent 40 robots to the 12 campuses listed and will be rolling out operations as the programs move forward on their own time. If your school isn’t one of the unis listed, don’t worry — if this goes the way Kiwi plans, it sounds like you can expect further expansion soon.
LEGO Braille bricks are the best, nicest and, in retrospect, most obvious idea ever

5:31pm, 24th April, 2019
Braille is a crucial skill to learn for children with visual impairments, and with these new LEGO Braille Bricks, kids can learn through hands-on play rather than more rigid methods like Braille readers and printouts. Given the naturally Braille-like structure of LEGO blocks, it’s surprising this wasn’t done decades ago. The truth is, however, that nothing can be obvious enough when it comes to marginalized populations like people with disabilities. But sometimes all it takes is someone in the right position to say “You know what? That’s a great idea and we’re just going to do it.” It happened with the product pictured above, and it seems to have happened at LEGO. Stine Storm led the project, but Morten Bonde, who himself suffers from degenerating vision, helped guide the team with the passion and insight that only comes with personal experience. In some remarks sent over by LEGO, Bonde describes his drive to help: When I was contacted by the LEGO Foundation to function as internal consultant on the LEGO Braille Bricks project, and first met with Stine Storm, where she showed me the Braille bricks for the first time, I had a very emotional experience. While Stine talked about the project and the blind children she had visited and introduced to the LEGO Braille Bricks I got goose bumps all over the body. I just knew that I had to work on this project. I want to help all blind and visually impaired children in the world dare to dream and see that life has so much in store for them. When, some years ago, I was hit by stress and depression over my blind future, I decided one day that life is too precious for me not to enjoy every second of. I would like to help give blind children the desire to embark on challenges, learn to fail, learn to see life as a playground, where anything can come true if you yourself believe that they can come true. That is my greatest ambition with my participation in the LEGO Braille Bricks project. The bricks themselves are very much like the originals, specifically the common 2×4 blocks, except they don’t have the full eight “studs” (so that’s what they’re called). Instead, they have the letters of the Braille alphabet, which happen to fit comfortably in a 2×3 array of studs, with room left on the bottom to put a visual indicator of the letter or symbol for sighted people. They’re compatible with ordinary LEGO bricks, and of course they can be stacked and attached to one another, though not with quite the same versatility as an ordinary block, as some symbols will have fewer studs. You’ll probably want to keep them separate, since they’re more or less identical unless you inspect them individually. All told, the set, which will be provided for free to institutions serving vision-impaired students, will include about 250 pieces: A-Z (with regional variants), the numerals 0-9, basic operators like + and =, and some “inspiration for teaching and interactive games.” Perhaps some specialty pieces for word games and math toys, that sort of thing. LEGO was already one of the toys that can be enjoyed equally by sighted and vision-impaired children, but this adds a new layer, or I suppose just re-engineers an existing and proven one, to extend and specialize the decades-old toy for a group that already seems to have taken to it: “The children’s level of engagement and their interest in being independent and included on equal terms in society is so evident.
I am moved to see the impact this product has on developing blind and visually impaired children’s academic confidence and curiosity already in its infant days,” said Bonde. Danish, Norwegian, English and Portuguese blocks are being tested now, with German, Spanish and French on track for later this year. The kit should ship in 2020 — if you think your classroom could use these, get in touch with LEGO right away.
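For the curious, here’s a rough sketch (my own illustration, not anything from LEGO’s spec) of how the standard six-dot Braille cell maps onto the 2×3 stud layout described above:

```python
# Standard six-dot Braille numbering: dots 1-2-3 run down the left column,
# dots 4-5-6 down the right. A few letters for illustration; the raised dots
# correspond to the studs present on a given brick.
BRAILLE_DOTS = {
    "A": {1},
    "B": {1, 2},
    "C": {1, 4},
    "L": {1, 2, 3},
}

def stud_grid(letter: str) -> list[tuple[bool, bool]]:
    """Return three rows, top to bottom, of (left stud, right stud) flags."""
    dots = BRAILLE_DOTS[letter]
    return [(left in dots, right in dots) for left, right in ((1, 4), (2, 5), (3, 6))]

print(stud_grid("L"))  # [(True, False), (True, False), (True, False)]
```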
Huawei’s P30 Pro excels on the camera front

1:11pm, 24th April, 2019
It’s been a month since Huawei unveiled its latest flagship device — the Huawei P30 Pro. I’ve played with the P30 and P30 Pro for a few weeks and I’ve been impressed with the camera system. The P30 Pro is the successor to last year’s P20 Pro and features improvements across the board. It could have been a truly remarkable phone, but some issues still hold it back compared to more traditional Android phones.

A flagship device

The P30 Pro is by far the most premium device in the P line. It features a gigantic 6.47-inch OLED display, a small teardrop notch near the top, an integrated fingerprint sensor in the display and a lot of cameras. Before diving into the camera system, let’s talk about the overall feel of the device. Compared to last year’s P20 Pro, the company removed the fingerprint sensor at the bottom of the screen and made the notch smaller. The in-display fingerprint sensor doesn’t perform as well as a dedicated one, but it gets the job done. It has become hard to differentiate smartphones based on design; the P30 Pro looks a lot like the OnePlus 6T or the Samsung Galaxy S10. The display features a 19.5:9 aspect ratio with a 2340×1080 resolution, and it is curved around the edges. The result is a phone with gentle curves. The industrial design is less angular, even though the top and bottom edges of the device have been flattened. Huawei uses an aluminum frame and a glass back with colorful gradients. Unfortunately, the curved display doesn’t work so well in practice. If you open an app with a unified white background, such as Gmail, you can see some odd-looking shadows near the edges.

Below the surface, the P30 Pro uses a Kirin 980 system-on-a-chip. Huawei’s homemade chip performs well. To be honest, smartphones have been performing well for a few years now. It’s hard to complain about performance anymore. The phone features a 40W USB-C charging port and an impressive 4,200 mAh battery. For the first time, Huawei added wireless charging to the P series (up to 15W). You can also charge another phone or an accessory with reverse wireless charging, just like on the Samsung Galaxy S10. Unfortunately, you have to manually activate the feature in the settings every time you want to use it. Huawei has also removed the speaker grille at the top of the display. The company now vibrates the display itself to turn it into a tiny speaker for your calls. In my experience, it works well.

While the phone ships with Android Pie, Huawei still layers on a lot of software customization with its EMUI user interface. There are a dozen useless Huawei apps that probably make sense in China, but don’t necessarily need to be there if you use Google apps. For instance, the HiCare app keeps sending me notifications. The onboarding process is also quite confusing, as some screens refer to Huawei features while others refer to standard Android features. It definitely won’t be a good experience for non-tech-savvy people.

(P30 Pro on the left, P30 on the right)

Four cameras to rule them all

The P20 Pro already had some great camera sensors and paved the way for night photos in recent Android devices. The P30 Pro camera system can be summed up in two words — more and better. The P30 Pro now features not one, not two, not three but f-o-u-r sensors on the back of the device. The main camera is a 40 MP sensor behind a 27mm-equivalent f/1.6 lens with optical image stabilization. There’s a 20 MP ultra-wide angle lens (16mm) with an f/2.2 aperture. The 8 MP telephoto lens provides nearly 5x optical zoom compared to the main lens (125mm) with an f/3.4 aperture and optical image stabilization. And there’s a new time-of-flight sensor below the flash of the P30 Pro; the phone projects infrared light and captures the reflection with this new sensor.
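A quick sanity check on those zoom figures, using the equivalent focal lengths quoted above (the 270mm number is simply 10 times the main lens, my extrapolation rather than a Huawei spec):

```python
# Zoom factors are just ratios of equivalent focal lengths.
main_mm, wide_mm, tele_mm = 27, 16, 125

print(round(tele_mm / main_mm, 1))   # 4.6 -- the "nearly 5x" optical zoom
print(round(wide_mm / main_mm, 1))   # 0.6 -- the ultra-wide's field of view relative to the main camera
print(10 * main_mm)                  # 270 -- the equivalent focal length a 10x hybrid zoom implies
```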
It has become a sort of a meme already — yes, the zoom works incredibly well on the P30 Pro. In addition to packing a lot of megapixels into the main sensor, the company added a telephoto lens with a periscope design. The module uses a mirror to bend the light at a right angle, which lets Huawei fit more layers of glass into the lens assembly without making the phone too thick. The company also combines the main camera sensor with the telephoto sensor to let you capture 10x shots using a hybrid digital-optical zoom. Here’s a photo series with the wide angle lens, the normal lens, a 5x zoom and a 10x zoom: And it works incredibly well in daylight. Unfortunately, you won’t be able to use the telephoto lens at night, as it doesn’t perform as well as the main camera. In addition to hardware improvements, Huawei has also worked on the algorithms that process your shots. Night mode performs incredibly well. You just have to hold your phone for 8 seconds so that it can capture as much light as possible. Here’s what it looks like in a completely dark room vs. an iPhone X: Huawei has also improved HDR processing and portrait photos. That new time-of-flight sensor works well when it comes to distinguishing a face from the background, for instance. Once again, Huawei is a bit too heavy-handed with post-processing. If you use your camera with the Master AI setting, colors are too saturated. The grass appears much greener than it is in reality. Skin smoothing with the selfie camera still feels weird too. The phone also aggressively smoothes surfaces on dark shots. When you pick a smartphone brand, you also pick a certain photography style. I’m not a fan of saturated photos, so Huawei’s bias toward unnatural colors doesn’t work in my favor. But if you like extremely vivid shots from insanely good sensors, the P30 Pro is for you. That array of lenses opens up a lot of possibilities and gives you more flexibility.

Fine print

The P30 Pro isn’t available in the U.S. But the company has already covered the streets of major European cities with P30 Pro ads. It costs €999 ($1,130) for 128GB of storage — there are more expensive options with more storage. Huawei also unveiled a smaller device — the P30. It’s always interesting to look at the compromises of the more affordable model. On that front, there’s a lot to like about the P30. For €799 ($900) with 128GB, you get a solid phone. It has a 6.1-inch OLED display and shares a lot of specifications with its bigger version. The P30 features the same system-on-a-chip, the same teardrop notch, the same in-display fingerprint sensor and the same screen resolution. Surprisingly, the P30 Pro doesn’t have a headphone jack while the P30 has one. There are some things you won’t find on the P30, such as wireless charging or the curved display. While the edges of the device are slightly curved, the display itself is completely flat. And I think it looks better. Cameras are slightly worse on the P30, and you won’t be able to zoom in as aggressively. Here’s the full rundown:

- A 40 MP main sensor with an f/1.8 aperture and optical image stabilization.
- A 16 MP ultra-wide angle lens with an f/2.2 aperture.
- An 8 MP telephoto lens that should provide 3x optical zoom.
- No time-of-flight sensor.
In the end, it really depends on what you’re looking for. The P30 Pro definitely has the best cameras of the P series. But the P30 is also an attractive phone for those looking for a smaller device. Huawei has once again pushed the limits of what you can pack in a smartphone when it comes to cameras. While iOS and Android are more mature than ever, it’s fascinating to see that hardware improvements are not slowing down.