Huawei announces smart glasses in partnership with Gentle Monster

10:05am, 26th March, 2019
Huawei is launching connected glasses in partnership with Gentle Monster, a Korean sunglasses and optical glasses brand. There won’t be a single model, but a collection of glasses with integrated electronics. Huawei is positioning the glasses as a sort of earbuds replacement, a device that lets you talk on the phone without putting anything in your ears. There’s no button on the device, but you can tap the temple of the glasses to answer a call, for instance.

The antenna, charging module, dual microphone, chipset, speaker and battery are all integrated in the eyeglass temples. The two microphones use beam-forming technology to understand what you’re saying even if the device is sitting on your nose. There are stereo speakers positioned right above your ears, so you can hear sound without disturbing your neighbors.

Interestingly, there’s no camera on the device. Huawei wants to avoid any privacy debate by skipping the camera altogether. Given that people have no issue with voice assistants and being surrounded by microphones, maybe people won’t be too suspicious.

The glasses come in a leather case with a USB-C port at the bottom. The case features wireless charging as well. Huawei teased the glasses at its launch event in Paris, but they won’t be available before July 2019.
Mobileye CEO clowns on Nvidia for allegedly copying self-driving car safety scheme

4:45pm, 25th March, 2019
While creating self-driving car systems, it’s natural that different companies might independently arrive at similar methods or results — but the similarities in a recent “first of its kind” Nvidia proposal to work done by Mobileye two years ago were just too much for the latter company’s CEO to take politely. Mobileye CEO Amnon Shashua openly mocks the proposal, pointing out innumerable similarities to Mobileye’s “Responsibility Sensitive Safety” paper from 2017. He writes:

It is clear Nvidia’s leaders have continued their pattern of imitation as their so-called “first-of-its-kind” safety concept is a close replica of the RSS model we published nearly two years ago. In our opinion, SFF is simply an inferior version of RSS dressed in green and black. To the extent there is any innovation there, it appears to be primarily of the linguistic variety.

Now, it’s worth considering the idea that the approach both seem to take is, like many in the automotive and autonomous fields and others, simply inevitable. Car makers don’t go around accusing each other of using the similar setup of four wheels and two pedals. It’s partly for this reason, and partly because the safety model works better the more cars follow it, that when Mobileye published its RSS paper, it did so publicly and invited the industry to collaborate. Many did, including, as Shashua points out, Nvidia itself, at least for a short time in 2018; the company pulled out of collaboration talks after that.

To do so and then, a year afterwards, propose a system that is, if not identical, then at least remarkably similar, and without crediting or mentioning Mobileye, is suspicious to say the least.

The (highly simplified) foundation of both is calculating a set of standard actions corresponding to laws and human behavior that plan safe maneuvers based on the car’s own physical parameters and those of nearby objects and actors. But the similarities extend beyond these basics, Shashua writes (emphasis his):

RSS defines a safe longitudinal and a safe lateral distance around the vehicle. When those safe distances are compromised, we say that the vehicle is in a Dangerous Situation and must perform a Proper Response. The specific moment when the vehicle must perform the Proper Response is called the Danger Threshold.

SFF defines identical concepts with slightly modified terminology. Safe longitudinal distance is instead called “the SFF in One Dimension;” safe lateral distance is described as “the SFF in Higher Dimensions.” Instead of Proper Response, SFF uses “Safety Procedure.” Instead of Dangerous Situation, SFF replaces it with “Unsafe Situation.” And, just to be complete, SFF also recognizes the existence of a Danger Threshold, instead calling it a “Critical Moment.”

This is followed by numerous other close parallels, and just when you think it’s done, he includes a comparison showing dozens of other cases where Nvidia seems (it’s hard to tell in some cases if you’re not closely familiar with the subject matter) to have followed Mobileye and RSS’s example over and over again.

Theoretical work like this isn’t really patentable, and patenting wouldn’t be wise anyway, since widespread adoption of the basic ideas is the most desirable outcome (as both papers emphasize). But it’s common for one R&D group to push in one direction and have others refine or create counter-approaches. You see it in computer vision, where for example Google boffins may publish their early and interesting work, which is picked up by FAIR or Uber and improved or added to in another paper 8 months later.
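To make the overlap concrete, here is a rough, illustrative sketch of the safe longitudinal distance idea at the heart of RSS, as I understand the published definition; the parameter values and function name are my own illustration, not something taken from either company’s whitepaper.

```python
def rss_safe_longitudinal_distance(
    v_rear: float,              # speed of the following (rear) car, m/s
    v_front: float,             # speed of the lead (front) car, m/s
    rho: float = 0.5,           # assumed response time of the rear car, seconds
    a_max_accel: float = 2.0,   # max acceleration during the response time, m/s^2
    a_min_brake: float = 4.0,   # minimum braking the rear car is assumed to apply, m/s^2
    a_max_brake: float = 8.0,   # maximum braking the front car might apply, m/s^2
) -> float:
    """Minimum following distance so the rear car can always stop in time,
    even if the front car brakes as hard as possible (RSS-style reasoning)."""
    # Distance covered while reacting (possibly still accelerating) ...
    reaction = v_rear * rho + 0.5 * a_max_accel * rho ** 2
    # ... plus the rear car's braking distance from its worst-case speed ...
    v_after_reaction = v_rear + rho * a_max_accel
    rear_braking = v_after_reaction ** 2 / (2 * a_min_brake)
    # ... minus the distance the front car still travels while braking hard.
    front_braking = v_front ** 2 / (2 * a_max_brake)
    return max(0.0, reaction + rear_braking - front_braking)

# Example: both cars doing 30 m/s (~108 km/h) on a highway.
print(f"{rss_safe_longitudinal_distance(30.0, 30.0):.1f} m")  # -> 79.1 m
```

When the actual gap falls below this value, RSS says the vehicle is in a Dangerous Situation and must execute its Proper Response; per Shashua’s comparison, SFF’s “Safety Procedure” at the “Critical Moment” plays the same role.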
So it really would have been fine for Nvidia to publicly say “Mobileye proposed some stuff, that’s great but here’s our superior approach.” Instead there is no mention of RSS at all, which is strange considering their similarity, and the only citation in the SFF whitepaper is “The Safety Force Field, Nvidia, 2017,” in which, we are informed on the very first line, “the precise math is detailed.”

Just one problem: This paper doesn’t seem to exist anywhere. It certainly was never published publicly in any journal or blog post by the company. It has no DOI number and doesn’t show up in any searches or article archives. This appears to be the first time anyone has ever cited it.

It’s not required for rival companies to be civil with each other all the time, but in the research world this will almost certainly be considered poor form on Nvidia’s part, and that can have knock-on effects when it comes to recruiting and overall credibility. I’ve contacted Nvidia for comment (and to ask for a copy of this mysterious paper). I’ll update this post if I hear back.
Apple introduces its own credit card, the Apple Card

2:35pm, 25th March, 2019
Today, Apple announced… a credit card. The Apple Card is designed for the iPhone and will work with the Wallet app. You sign up from your iPhone and you can use it with Apple Pay in just a few minutes.

Before introducing the card, Apple CEO Tim Cook shared a few numbers about Apple Pay. This year, Apple Pay will reach 10 billion transactions. By the end of this year, Apple Pay will be available in more than 40 countries. Retail acceptance of Apple Pay keeps growing, too. In the U.S., 70 percent of businesses accept Apple Pay, and it’s higher in some countries — Australia is at 99 percent acceptance, for instance.

But let’s talk about the Apple Card. After signing up, you control the Apple Card from the Wallet app. When you tap on the card, you can see your latest transactions, how much you owe and how much money you spent in each category. You can tap on a transaction and see its location in a tiny Apple Maps view.

Every time you make an Apple Pay transaction, you get 2 percent in cash back. You don’t have to wait until the end of the month, as your cash is credited every day. For Apple purchases, you get 3 percent back.

As previously rumored, Apple has partnered with Goldman Sachs and Mastercard to issue the card. Apple doesn’t know what you bought, where you bought it or how much you paid for it. And Goldman Sachs promises that it won’t sell your data for advertising or marketing.

When it comes to the fine print, there are no late fees, no annual fees, no international fees and no over-limit fees. If you can’t pay back your credit card balance, you can start a multi-month plan — Apple tries to clearly define the terms of the plan. You can contact customer support through text messages in the Messages app.

The Apple Card isn’t limited to a virtual card. You get a physical titanium card with a laser-etched name. There’s no card number, no CVV code, no expiration date and no signature on the card. You have to use the Wallet app to get that information. Physical transactions are eligible for 1 percent in daily cash.

When it comes to security, you’ll get a different credit card number for each of your devices. It is stored securely on the device and you can access the PIN code using Face ID or your fingerprint. The card will be available this summer for customers in the U.S.
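As a quick illustration of how the Daily Cash tiers described above add up, here is a tiny sketch; only the 3, 2 and 1 percent rates come from Apple’s announcement, while the purchases and the function itself are hypothetical.

```python
# Daily Cash rates from Apple's announcement: 3% on Apple purchases,
# 2% on Apple Pay purchases, 1% on physical-card purchases.
RATES = {"apple": 0.03, "apple_pay": 0.02, "physical_card": 0.01}

def daily_cash(purchases):
    """purchases: list of (amount_usd, channel) tuples for a single day."""
    return sum(amount * RATES[channel] for amount, channel in purchases)

# Hypothetical day of spending.
day = [(999.00, "apple"), (42.50, "apple_pay"), (18.00, "physical_card")]
print(f"Daily Cash earned: ${daily_cash(day):.2f}")  # -> $31.00
```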
How to watch the live stream for today’s Apple keynote

3:45am, 25th March, 2019
Apple is holding a keynote today on its campus in Cupertino, and the company is expected to talk about new services. Don’t expect any new devices; today’s event should be all about content. At 10 AM PT (1 PM in New York, 5 PM in London, 6 PM in Paris), you’ll be able to watch the event as the company is streaming it live.

Rumor has it that the company plans to unveil multiple new services. The most anticipated one is a new video streaming service that should compete with Netflix, Amazon Prime Video and others. In addition to that service, Apple will unveil an Apple News subscription to access magazines and premium articles for a flat monthly fee. But we might also hear about other rumored products, such as a credit card and a gaming subscription service. Details are still thin, so it’s going to be interesting to hear Apple talk about all those services.

If you have an Apple TV, you can download the Apple Events app in the App Store. It lets you stream today’s event and rewatch old ones. The app icon was updated a few days ago for the event. And if you don’t have an Apple TV, the company also lets you live-stream the event from the events page on its website. This video feed now works in all major browsers — Safari, Microsoft Edge, Google Chrome and Mozilla Firefox.

So to recap, here’s how you can watch today’s Apple event: your favorite web browser on a Mac or Windows 10 PC; an Apple TV with the Apple Events app from the App Store; or Google Chrome on your Android phone. Of course, you also can read our live coverage if you’re stuck at work and really need our entertaining commentary track to help you get through your day. We have a team in the room.
Apple could charge $9.99 per month each for HBO, Showtime and Starz

2:47pm, 24th March, 2019
The Wall Street Journal has published a new report on Apple’s media push. The company is about to unveil a new video streaming service and an Apple News subscription on Monday. According to the WSJ, you’ll be able to subscribe to multiple content packages to increase the video library in a new app called TV — it’s unclear if this app is going to replace the existing Apple TV app.

The service would work more or less like Amazon’s channel subscriptions. Users will be able to subscribe to HBO, Showtime or Starz for a monthly fee. The WSJ says that these three partners would charge $9.99 per month each. According to a previous report, it differs from the existing Apple TV app in that you won’t be redirected to another app. Everything will be available within a single app.

Controlling the experience from start to finish would be a great advantage for users. As many people now suffer from subscription fatigue, Apple would be able to centralize all your content subscriptions in a single app. You could tick and untick options depending on your needs.

But some companies probably don’t want to partner with Apple. It’s highly unlikely that you’ll find Netflix or Amazon Prime Video content in the Apple TV app. Those services also want to control the experience from start to finish. It’s also easier to gather data analytics when subscribers are using your own app.

Apple is also expected to open up the Apple TV app to other platforms. Just like you can play music from Apple Music on Android, a Sonos speaker or an Amazon Echo speaker, Apple is working on apps for smart TVs. The company has already launched iTunes Store apps on Samsung TVs, so it wouldn’t be a big surprise.

The company has also spent a ton of money on original content for its own service. Details are still thin on this front, and many of those shows won’t be ready for Monday. Do you have to pay to access Apple’s content too? How much? We’ll find out on Monday.

When it comes to Apple News, the WSJ says that content from 200 magazines and newspapers will be available for $9.99 per month. The Wall Street Journal also confirms earlier reports that the newspaper itself will be part of the subscription.

Apple is also monitoring the App Store to detect popular apps according to multiple metrics, the WSJ says. Sure, Apple runs the App Store. But Facebook faced a public outcry when people realized that the company was monitoring app usage with a VPN app called Onavo.
Apple could announce its gaming subscription service on Monday

12:37pm, 24th March, 2019
Apple is about to unveil some new services on Monday. While everybody expects a video streaming service as well as a news subscription, a new report from Bloomberg says that the company might also mention its gaming subscription.

Cheddar first reported back in January that Apple has been working on a gaming subscription. Users could pay a monthly subscription fee to access a library of games. We’re most likely talking about iOS games for the iPhone and iPad here.

Games are the most popular category on the App Store, so it makes sense to turn this category into a subscription business. And yet, most of them are free-to-play, ad-supported games. Apple doesn’t necessarily want to target those games in particular. According to Bloomberg, the service will focus on paid games from third-party developers, such as Minecraft, NBA 2K games and the GTA franchise.

Users would essentially pay to access this bundle of games. Apple would redistribute revenue to game developers based on how much time users spend in each game. It’s still unclear whether Apple will announce the service or launch it on Monday. The gaming industry is more fragmented than the movie and TV industry, so it makes sense to talk about the service publicly even if it’s not ready just yet.
The damage of defaults

12:48pm, 23rd March, 2019
Apple popped out a new pair of AirPods this week. The design looks exactly like the old pair of AirPods. Which means I’m never going to use them because Apple’s bulbous earbuds don’t fit my ears. Think square peg, round hole.

The only way I could rock AirPods would be to walk around with hands clamped to the sides of my head to stop them from falling out. Which might make a nice cut in a glossy Apple ad for the gizmo — suggesting a feeling of closeness to the music, such that you can’t help but cup; a suggestive visual metaphor for the aural intimacy Apple surely wants its technology to communicate.

But the reality of trying to use earbuds that don’t fit is not that at all. It’s just shit. They fall out at the slightest movement so you either sit and never turn your head or, yes, hold them in with your hands. Oh hai, hands-not-so-free-pods!

The obvious point here is that one size does not fit all — howsoever much Apple’s Jony Ive and his softly spoken design team believe they have devised a universal earbud that pops snugly in every ear and just works. Sorry, nope!

Hi Apple, I fixed that sketch for you. Introducing InPods — because one size doesn’t fit all — Natasha (@riptari)

A proportion of iOS users — perhaps other petite women like me, or indeed men with less capacious ear holes — are simply being removed from Apple’s sales equation where earbuds are concerned. Apple is pretending we don’t exist.

Sure we can just buy another brand of more appropriately sized earbuds. The in-ear, noise-canceling kind are my preference. Apple does not make ‘InPods’. But that’s not a huge deal. Well, not yet.

It’s true, the consumer tech giant did also delete the headphone jack from iPhones. Thereby depreciating my existing pair of wired in-ear headphones (which can’t connect to a 3.5mm-jack-less iPhone without a dongle). But I could just shell out for Bluetooth wireless in-ear buds that fit my shell-like ears and carry on as normal.

Universal in-ear headphones have existed for years, of course. A delightful design concept. You get a selection of different sized rubber caps shipped with the product and choose the size that best fits.

Unfortunately Apple isn’t in the ‘InPods’ business though. Possibly for aesthetic reasons. Most likely because — and there’s more than a little irony here — an in-ear design wouldn’t be naturally roomy enough to fit all the stuff Siri needs to, y’know, fake intelligence. Which means people like me with small ears are being passed over in favor of Apple’s voice assistant. So that’s AI: 1, non-‘standard’-sized human: 0. Which also, unsurprisingly, feels like shit.

I say ‘yet’ because if voice computing does become the next major computing interaction paradigm, as some believe — given how Internet connectivity is set to get baked into everything (and sticking screens everywhere would be a visual and usability nightmare; albeit microphones everywhere is a privacy nightmare… ) — then the minority of humans with petite earholes will be at a disadvantage vs those who can just pop in their smart, sensor-packed earbud and get on with telling their Internet-enabled surroundings to do their bidding.

Will parents of future generations of designer babies select for adequately capacious earholes so their child can pop an AI in? Let’s hope not.

We’re also not at the voice computing singularity yet. Outside the usual tech bubbles it remains a bit of a novel gimmick. Amazon has drummed up some interest with in-home smart speakers housing its own voice AI Alexa (a brand choice that has, incidentally, caused a verbal headache for actual humans called Alexa).
Though its Echo smart speakers appear to mostly get used as expensive weather checkers and egg timers. Or else for playing music — a function that a standard speaker or smartphone will happily perform. Certainly a voice AI is not something you need with you 24/7 yet. Prodding at a touchscreen remains the standard way of tapping into the power and convenience of mobile computing for the majority of consumers in developed markets.

The thing is, though, it still grates to be ignored. To be told — even indirectly — by one of the world’s wealthiest consumer technology companies that it doesn’t believe your ears exist. Or, well, that it’s weighed up the sales calculations and decided it’s okay to drop a petite-holed minority on the cutting room floor. So that’s ‘ear meet AirPod’. Not ‘AirPod meet ear’ then.

But the underlying issue is much bigger than Apple’s (in my case) oversized earbuds. Its latest shiny set of AirPods are just an ill-fitting reminder of how many technology defaults simply don’t ‘fit’ the world as claimed. Because if cash-rich Apple’s okay with promoting a universal default (that isn’t), think of all the less well resourced technology firms chasing scale for other single-sized, ill-fitting solutions. And all the problems flowing from attempts to mash ill-mapped technology onto society at large.

When it comes to wrong-sized physical kit I’ve had similar issues with standard office computing equipment and furniture. Products that seem — surprise, surprise! — to have been default designed with a 6ft strapping guy in mind. Keyboards so long they end up gifting the smaller user RSI. Office chairs that deliver chronic back pain as a service. Chunky mice that quickly wrack the hand with pain. (Apple is an offender there too, I’m afraid.)

The fix for such ergonomic design failures is simply not to use the kit. To find a better-sized (often DIY) alternative that does ‘fit’. But a DIY fix may not be an option when the discrepancy is embedded at the software level — and where a system is being applied to you, rather than you the human wanting to augment yourself with a bit of tech, such as a pair of smart earbuds.

With software, embedded flaws and system design failures may also be harder to spot because it’s not necessarily immediately obvious there’s a problem. Oftentimes algorithmic bias isn’t visible until damage has been done. And there’s no shortage of stories already about how software defaults configured for a biased median have ended up causing real-world harm. (See, for example, ProPublica’s investigation of the COMPAS recidivism tool — software it found incorrectly judging black defendants more likely to offend than white. So software amplifying existing racial prejudice.)

Of course AI makes this problem so much worse. Which is why the emphasis must be on catching bias in the datasets — before there is a chance for prejudice or bias to be ‘systematized’ and get baked into algorithms that can do damage at scale. The algorithms must also be explainable. And outcomes auditable. Transparency as disinfectant; not secret blackboxes stuffed with unknowable code.

Doing all this requires huge up-front thought and effort on system design, and an even bigger change of attitude. It also needs massive, massive attention to diversity. An industry-wide championing of humanity’s multifaceted and multi-sized reality — and making sure that’s reflected in both data and design choices (and therefore the teams doing the design and dev work). You could say what’s needed is a recognition there’s never, ever a one-size-fits-all plug.
Indeed, that all algorithmic ‘solutions’ are abstractions that make compromises on accuracy and utility. And that those trade-offs can become viciously cutting knives that exclude, deny, disadvantage, delete and damage people at scale. Expensive earbuds that won’t stay put are just a handy visual metaphor.

And while discussion about the risks and challenges of algorithmic bias has stepped up in recent years, as AI technologies have proliferated — with mainstream tech conferences actively debating how to “democratize AI” and bake diversity and ethics into system design via a development focus on principles like transparency, explainability, accountability and fairness — the industry has not even begun to fix its diversity problem. It’s barely moved the needle on diversity. And its products continue to reflect that fundamental flaw.

Stanford just launched their Institute for Human-Centered Artificial Intelligence with great fanfare. The mission: "The creators and designers of AI must be broadly representative of humanity." 121 faculty members listed. Not a single faculty member is Black. — Chad Loder ❁ (@chadloder)

Many — if not most — of the tech industry’s problems can be traced back to the fact that inadequately diverse teams are chasing scale while lacking the perspective to realize their system design is repurposing human harm as a de facto performance measure. (Although ‘lack of perspective’ is the charitable interpretation in certain cases; moral vacuum may be nearer the mark.)

As WWW creator Sir Tim Berners-Lee has pointed out, system design is now society design. That means engineers, coders, AI technologists are all working at the frontline of ethics. The design choices they make have the potential to impact, influence and shape the lives of millions and even billions of people. And when you’re designing society a median mindset and limited perspective cannot ever be an acceptable foundation. It’s also a recipe for product failure down the line.

The current backlash against big tech shows that the stakes and the damage are very real when poorly designed technologies get dumped thoughtlessly on people. Life is messy and complex. People won’t fit a platform that oversimplifies and overlooks. And if your excuse for scaling harm is ‘we just didn’t think of that’ you’ve failed at your job and should really be headed out the door.

Because the consequences of being excluded by flawed system design are also scaling and stepping up as platforms proliferate and more life-impacting decisions get automated. Harm is being squared. Even as the underlying industry drum hasn’t skipped a beat in its prediction that everything will be digitized.

Which means that horribly biased parole systems are just the tip of the ethical iceberg. Think of healthcare, social welfare, law enforcement, education, recruitment, transportation, construction, urban environments, farming, the military; the list of what will be digitized — and of manual or human-overseen processes that will get systematized and automated — goes on.

Software — runs the industry mantra — is eating the world. That means badly designed technology products will harm more and more people. But responsibility for sociotechnical misfit can’t just be scaled away as so much ‘collateral damage’.

So while an ‘elite’ design team led by a famous white guy might be able to craft a pleasingly curved earbud, such an approach cannot and does not automagically translate into AirPods with perfect, universal fit. It’s someone’s standard. It’s certainly not mine.
We can posit that a more diverse Apple design team might have been able to rethink the AirPod design so as not to exclude those with smaller ears. Or make a case to convince the powers that be in Cupertino to add another size choice. We can but speculate. What’s clear is the future of technology design can’t be so stubborn. It must be radically inclusive and incredibly sensitive. Human-centric. Not locked to damaging defaults in its haste to impose a limited set of ideas. Above all, it needs a listening ear on the world. Indifference to difference and a blindspot for diversity will find no future here.
Gates-backed Lumotive upends lidar conventions using metamaterials

3:09pm, 22nd March, 2019
Pretty much every self-driving car on the road, not to mention many a robot and drone, uses lidar to sense its surroundings. But useful as lidar is, it also involves physical compromises that limit its capabilities. Lumotive is a new company with funding from Bill Gates and Intellectual Ventures that uses metamaterials to exceed those limits, perhaps setting a new standard for the industry.

The company is just now coming out of stealth, but it’s been in the works for a long time. I actually met with them back in 2017 when the project was very hush-hush and operating under a different name at IV’s startup incubator. If the terms “metamaterials” and “Intellectual Ventures” tickle something in your brain, it’s because the latter has spawned several startups that use intellectual property developed there, building on the work of materials scientist David Smith.

Metamaterials are essentially specially engineered surfaces with microscopic structures — in this case, tunable antennas — embedded in them, working as a single device. Echodyne is another company that used metamaterials to great effect, shrinking radar arrays to pocket size by engineering a radar transceiver that’s essentially 2D and can have its beam steered electronically rather than mechanically.

The principle works for pretty much any wavelength of electromagnetic radiation — i.e. you could use X-rays instead of radio waves — but until now no one has made it work with visible light. That’s Lumotive’s advance, and the reason it works so well.

Flash, 2D and 1D lidar

Lidar basically works by bouncing light off the environment and measuring how and when it returns; this can be accomplished in several ways.

Flash lidar basically sends out a pulse that illuminates the whole scene with near-infrared light (905 nanometers, most likely) at once. This provides a quick measurement of the whole scene, but limited distance, as the power of the light being emitted is limited.

2D or raster-scan lidar takes a NIR laser and plays it over the scene incredibly quickly, left to right, down a bit, then does it again, again, and again… scores or hundreds of times. Focusing the power into a beam gives these systems excellent range, but similar to a CRT TV with an electron beam tracing out the image, it takes rather a long time to complete the whole scene. Turnaround time is naturally of major importance in driving situations.

1D or line-scan lidar strikes a balance between the two, using a vertical line of laser light that only has to go from one side to the other to complete the scene. This sacrifices some range and resolution but significantly improves responsiveness.

Lumotive offered a diagram that helps visualize the three systems, although obviously “suitability” and “too short” and “too slow” are somewhat subjective.

The main problem with the latter two is that they rely on a mechanical platform to actually move the laser emitter or mirror from place to place. It works fine for the most part, but there are inherent limitations. For instance, it’s difficult to stop, slow, or reverse a beam that’s being moved by a high-speed mechanism. If your 2D lidar system sweeps over something that could be worth further inspection, it has to go through the rest of its motions before coming back to it… over and over.

This is the primary advantage offered by a metamaterial system over existing ones: electronic beam steering.
In Echodyne’s case the radar could quickly sweep over its whole range like normal, and upon detecting an object could immediately switch over and focus 90 percent of its cycles tracking it in higher spatial and temporal resolution. The same thing is now possible with lidar.

Imagine a deer jumping out around a blind curve. Every millisecond counts because the earlier a self-driving system knows the situation, the more options it has to accommodate it. All other things being equal, an electronically steered lidar system would detect the deer at the same time as the mechanically steered ones, or perhaps a bit sooner. But upon noticing this movement, it could not just make more time for evaluating it on the next “pass,” but a microsecond later be backing up the beam and specifically targeting just the deer with the majority of its resolution. (Just for illustration; the beam isn’t some big red thing that comes out.)

Targeted illumination would also improve the estimation of direction and speed, further improving the driving system’s knowledge and options — meanwhile the beam can still dedicate a portion of its cycles to watching the road, requiring no complicated mechanical hijinks to do so. Meanwhile it has an enormous aperture, allowing high sensitivity.

In terms of specs, it depends on many things, but if the beam is just sweeping normally across its 120×25 degree field of view, the standard unit will have about a 20Hz frame rate, with a 1000×256 resolution. That’s comparable to competitors, but keep in mind that the advantage is in the ability to change that field of view and frame rate on the fly. In the example of the deer, it may maintain a 20Hz refresh for the scene at large but concentrate more beam time on a 5×5 degree area, giving it a much faster rate.

Meta doesn’t mean mega-expensive

Naturally one would assume that such a system would be considerably more expensive than existing ones. Pricing is still a ways out — Lumotive just wanted to show that its tech exists for now — but this is far from exotic tech.

The team told me in an interview that their engineering process was tricky specifically because they designed it for fabrication using existing methods. It’s silicon-based, meaning it can use cheap and ubiquitous 905nm lasers rather than the rarer 1550nm, and its fabrication isn’t much more complex than making an ordinary display panel. CTO and co-founder Gleb Akselrod explained: “Essentially it’s a reflective semiconductor chip, and on the surface we fabricate these tiny antennas to manipulate the light. It’s made using a standard semiconductor process, then we add liquid crystal, then the coating. It’s a lot like an LCD.”

An additional bonus of the metamaterial basis is that it works the same regardless of the size or shape of the chip. While an inch-wide rectangular chip is best for automotive purposes, Akselrod said, they could just as easily make one a quarter the size for robots that don’t need the wider field of view, or a larger or custom-shaped one for a specialty vehicle or aircraft.

The details, as I said, are still being worked out. Lumotive has been working on this for years and decided it was time to just get the basic information out there. “We spend an inordinate amount of time explaining the technology to investors,” noted CEO and co-founder Bill Colleran.
He, it should be noted, is a veteran innovator in this field, having headed Impinj most recently, and before that was at Broadcom, but he is perhaps best known for being CEO of Innovent when it created the first CMOS Bluetooth chip.

Right now the company is seeking investment after running on a 2017 seed round funded by Bill Gates and IV, which (as with other metamaterial-based startups it has spun out) is granting Lumotive an exclusive license to the tech. There are partnerships and other things in the offing, but the company wasn’t ready to talk about them; the product is currently in prototype but very showable form for the inevitable meetings with automotive and tech firms.
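As a back-of-the-envelope illustration of the numbers quoted above (1000×256 points per frame at 20Hz across a 120×25 degree field of view), here is a small sketch of why concentrating beam time on a region of interest pays off. The arithmetic and the assumed time split are my own illustration, not figures from Lumotive.

```python
# Rough numbers from the article: 1000 x 256 points per frame at 20 Hz,
# over a 120 x 25 degree field of view. Everything below is illustrative.
FRAME_RATE_HZ = 20
POINTS_PER_FRAME = 1000 * 256
FOV_DEG2 = 120 * 25            # full field of view, in square degrees
ROI_DEG2 = 5 * 5               # small region of interest (e.g. around a deer)

points_per_second = FRAME_RATE_HZ * POINTS_PER_FRAME
print(f"~{points_per_second / 1e6:.1f}M points/sec overall")             # ~5.1M

# Assumption: the steerable beam spends half its time on the ROI instead of
# the ~0.8% of time the ROI would get in a uniform sweep of the full field.
uniform_share = ROI_DEG2 / FOV_DEG2
boosted_share = 0.5
print(f"ROI revisit rate boost: ~{boosted_share / uniform_share:.0f}x")   # ~60x
```

That, in essence, is what electronic beam steering buys: the same emitter budget, spent wherever the scene demands it.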
This is what the Huawei P30 will look like

1:11pm, 21st March, 2019
You can already find many leaked photos of Huawei’s next flagship devices — the P30 and P30 Pro. The company is set to announce the new products at an event in Paris next week. So here’s what you should expect.

Reliable phone leaker Evan Blass shared many different photos of the new devices in three different tweets. As you can see, both devices feature three cameras on the back. The notch is getting smaller and now looks like a teardrop. Compared to the P20 and P20 Pro, the fingerprint sensor is gone. It looks like Huawei is going to integrate the fingerprint sensor in the display, just like Samsung did with the Samsung Galaxy S10.

Blass also shared some leaked ads with a few specifications. The P30 Pro will have a 10x hybrid zoom while the P30 will have a 5x hybrid zoom — it’s unclear how the combination of a hardware zoom with a software zoom will work. Huawei has been doing some good work on the camera front, so this is going to be a key part of next week’s presentation.

For the first time, Huawei will put wireless charging in its flagship device — it’s about time. And it looks like the P30 Pro will adopt a curved display for the first time as well. I’ll be covering the event next week, so stay tuned.
Apple announces new AirPods

9:03am, 20th March, 2019
Apple has just announced the second-generation AirPods. The new AirPods are fitted with the H1 chip, which is meant to offer performance efficiencies, faster connect times between the pods and your devices, and the ability to ask for Siri hands-free with the “Hey Siri” command. Because of its performance efficiency, the H1 chip also allows the AirPods to offer 50 percent more talk time. Switching between devices is 2x faster than with the previous-generation AirPods, according to Apple.

Here’s what Phil Schiller had to say in the press release:

AirPods delivered a magical wireless experience and have become one of the most beloved products we’ve ever made. They connect easily with all of your devices, and provide crystal clear sound and intuitive, innovative control of your music and audio. The world’s best wireless headphones just got even better with the new AirPods. They are powered by the new Apple-designed H1 chip which brings an extra hour of talk time, faster connections, hands-free ‘Hey Siri’ and the convenience of a new wireless battery case.

The second-gen AirPods are available with the standard wired charging case ($159), or a new Wireless Charging Case ($199). A standalone wireless charging case is also available for purchase for $79. We’ve reached out to Apple to ask if the wireless case is backwards compatible with first-gen AirPods and will update the post once we know more.

Update: Turns out, the wireless charging case works “with all generations of AirPods,” according to Apple. It appears that older models are still for sale, as well.

Two times faster switching and 50 percent more talk time might sound like small perks for a relatively expensive accessory, but I’ve found that one of my biggest pain points with the AirPods has been the lack of voice control and the time it takes to switch devices. The introduction of “Hey Siri” alongside the new H1 chip, as well as much faster switching between devices, should noticeably elevate the user experience with these new AirPods.

After a search through the Apple website, it appears that the old AirPods are no longer available for sale. The new AirPods are available to order today from Apple.com and the Apple Store app, with in-store availability beginning next week.
Markforged raises $82 million for its industrial 3D printers

6:53am, 20th March, 2019
3D printer manufacturer Markforged has raised another round of funding. Summit Partners is leading the $82 million Series D round, with Matrix Partners, Microsoft’s venture arm, Next47 and Porsche SE also participating.

When you think about 3D printers, chances are you’re thinking about microwave-sized, plastic-focused 3D printers for hobbyists. Markforged is basically at the other end of the spectrum, focused on expensive 3D printers for industrial use cases. In addition to increased precision, Markforged can manufacture parts in strong materials, such as carbon fiber, Kevlar or stainless steel. And that can greatly impact your manufacturing process.

For instance, you can prototype your next products with a Markforged printer. Instead of getting sample parts from third-party companies, you can manufacture your parts in house. If you’re not going to sell hundreds of thousands of products, you could even consider using Markforged to produce parts for your commercial products. And if you work in an industry that requires a ton of different parts but you don’t need a lot of inventory, you could also consider using a 3D printer to manufacture parts whenever you need them.

Markforged has a full-stack approach and controls everything from the 3D printer to the software and materials. Once you’re done designing your CAD 3D model, you can send it to your fleet of printers. The company’s application also lets you manage different versions of the same part and collaborate with other people.

According to the company, Markforged has attracted 4,000 customers, such as Canon, Microsoft, Google, Amazon, General Motors, Volkswagen and Adidas. The company shipped 2,500 printers in 2018. With today’s funding round, the company plans to do more of the same — you can expect mass-production printers and more materials in the future. Eventually, Markforged wants to make it cheaper to manufacture parts at scale instead of producing those parts through other means.
Google is creating its own first-party game studio

1:33pm, 19th March, 2019
Google just unveiled Stadia, its cloud gaming platform, at a conference in San Francisco. While most of the conference showcased well-known games you can play on your PC, Xbox One or PlayStation 4, the company also announced that it is launching its own first-party game studio, Stadia Games and Entertainment. Jade Raymond is going to head the studio and was there to announce the first details.

The company is going to work on exclusive games for Stadia. But the studio will have a bigger role than that. "I'm excited to announce that, as the head of Stadia Games and Entertainment, I will not only be bringing first party game studios to reimagine the next generation of games,” Raymond said. “Our team will also be working with external developers to bring all of the bleeding edge Google technology you have seen today available to partner studios big and small."

Raymond has been working in the video game industry for more than 15 years. In particular, she was a producer for Ubisoft in Montreal during the early days of the Assassin’s Creed franchise. She also worked on Watch Dogs before leaving Ubisoft for Electronic Arts. She formed Motive Studios for Electronic Arts and worked with Visceral Games, another Electronic Arts game studio. She was working on an untitled single-player Star Wars video game, but Visceral Games closed in 2017 and the fate of that project has been unclear since then.

According to Google, 100 studios around the world have already received development hardware for Stadia. There are over 1,000 engineers and creatives working on Stadia games or ports right now.

Stadia uses Vulkan and a Linux operating system. Games that are already compatible with Linux should be easy to port to Stadia, but there might be more work for studios focused on Windows games. The cloud instance runs on Debian and features Vulkan. The machine runs an x86 CPU with a “custom AMD GPU with HBM2 memory and 56 compute units capable of 10.7 teraflops.” That sounds a lot like the AMD Radeon RX Vega 56, a relatively powerful GPU, but not as powerful as what you can find in high-end gaming PCs today. Google will be running a program called Stadia Partners to help third-party developers understand this new platform.
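For a sense of why that spec reads like a Vega 56, here is the standard back-of-the-envelope FP32 throughput calculation for a GPU; the clock speed below is my assumption, chosen so the arithmetic lands near the quoted figure, not a number Google has announced.

```python
# Peak FP32 throughput = shader count * 2 ops per clock (fused multiply-add) * clock speed.
compute_units = 56                 # from Google's quoted spec
shaders_per_cu = 64                # typical for AMD GCN-based architectures like Vega
clock_ghz = 1.49                   # assumed clock, not an announced figure

shaders = compute_units * shaders_per_cu           # 3,584 stream processors
teraflops = shaders * 2 * clock_ghz / 1000
print(f"{teraflops:.1f} TFLOPS")                    # ~10.7 TFLOPS, matching the quoted spec
```

The retail RX Vega 56 has exactly that 3,584-shader count, which is why the comparison above is a natural one.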
Intel and Cray are building a $500 million ‘exascale’ supercomputer for Argonne National Lab

6:05pm, 18th March, 2019
In a way, I have the equivalent of a supercomputer in my pocket. But in another, more important way, that pocket computer is a joke compared with real supercomputers — and Intel and Cray are putting together one of the biggest ever with a half-billion-dollar contract from the Department of Energy.

The “Aurora” program aims to put together an “exascale” computing system for Argonne National Laboratory by 2021. The “exa” is a prefix indicating bigness, in this case 1 quintillion floating point operations per second, or FLOPS. They’re kind of the horsepower rating of supercomputers.

For comparison, your average modern CPU does maybe a hundred or more gigaflops. A thousand gigaflops makes a teraflop, a thousand teraflops makes a petaflop, and a thousand petaflops makes an exaflop. So despite major advances in computing efficiency going into making super-powerful smartphones and desktops, we’re talking about a difference of several orders of magnitude. (Let’s not get into GPUs; it’s complicated.) And even when compared with the biggest supercomputers and clusters out there, you’re still looking at several times the performance of the current leader (that would be IBM’s Summit, over at Oak Ridge National Lab) or thereabouts.

Just what do you need that kind of computing power for? Petaflops wouldn’t do it? Well, no, actually. One very recent example of computing limitations in real-world research was a study of how climate change could affect cloud formation in certain regions, reinforcing the trend and leading to a vicious cycle. This kind of thing could only be estimated with much coarser models before; computing resources were too tight to allow for the extremely large number of variables involved. Imagine simulating a ball bouncing on the ground — easy — now imagine simulating every molecule in that ball, their relationships to each other, gravity, air pressure, other forces — hard. Now imagine simulating two stars colliding.

The more computing resources we have, the more can be dedicated to, as the Intel press release offers as examples, “developing extreme-scale cosmological simulations, discovering new approaches for drug response prediction and discovering materials for the creation of more efficient organic solar cells.”

Intel says that Aurora will be the first exaflop system in the U.S. — an important caveat, since China is aiming to accomplish the task a year earlier. There’s no reason to think they won’t achieve it, either, since Chinese supercomputers have reliably been among the fastest in the world.

If you’re curious what ANL may be putting its soon-to-be-built computer to work on, the short answer is “just about everything.”
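To make the orders-of-magnitude point concrete, here is a trivial sketch comparing a 1-exaflop machine with the roughly 100-gigaflop CPU mentioned above; the figures are round illustrations rather than benchmarks of any particular chip.

```python
# Prefix ladder: giga -> tera -> peta -> exa, each a factor of 1,000.
GIGA, TERA, PETA, EXA = 1e9, 1e12, 1e15, 1e18

desktop_cpu_flops = 100 * GIGA   # ~100 gigaflops, a round figure for a modern CPU
aurora_flops = 1 * EXA           # the exascale target: 1 quintillion FLOPS

ratio = aurora_flops / desktop_cpu_flops
print(f"Aurora would be ~{ratio:,.0f}x faster")                      # ~10,000,000x
print(f"That is {len(str(int(ratio))) - 1} orders of magnitude")     # 7
```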
Apple launches new iPad Air and iPad mini

9:26am, 18th March, 2019
Apple has refreshed its iPad lineup with two new models. The company is (finally) updating the iPad mini and adding a new iPad Air. This model sits between the entry-level 9.7-inch iPad and the 11-inch iPad Pro in the lineup.

All the new models now support the Apple Pencil, but you might want to double-check your iPad model before buying one. The new iPad models released today work with the first-gen Apple Pencil, not the new Apple Pencil that supports magnetic charging and pairing. So let’s look at those new iPads.

First, the iPad mini. It hasn’t been refreshed in three and a half years, and many people believed that Apple would simply drop the model as smartphones got bigger. But the iPad mini is making a surprise comeback. It looks identical to the previous 2015 model, but everything has been updated inside the device. It now features an A12 chip (the system on a chip designed for the iPhone XS) and a 7.9-inch display that is 25 percent brighter, covers a wider range of colors and works with True Tone. And it also works with the Apple Pencil.

Unlike the iPad Pro, the iPad mini still features a Touch ID fingerprint sensor, a Lightning port and a headphone jack. You can buy it today for $399 for 64GB. You can choose to pay more for 256GB of storage and cellular connectivity. It comes in silver, space gray and gold.

Second, the iPad Air. While the name sounds familiar, this is a new device in the iPad lineup. When Apple introduced the new iPad Pro models back in October, it raised the prices on this segment of the market. This new iPad Air is a bit cheaper than the 11-inch iPad Pro and looks more or less like the previous-generation 10.5-inch iPad Pro — I know it’s confusing. The iPad Air features an A12 chip, which should represent a significant upgrade over the previous-generation iPad Pro and its A10X. The iPad Air also works with the Smart Keyboard. You can buy the device today for $499 with 64GB of storage. You can choose to pay more for 256GB of storage and cellular connectivity. It comes in silver, space gray and gold.

The $329 iPad with a 9.7-inch display hasn’t been updated today. It still features an A10 chip, 64GB of storage and a display without True Tone technology or a wider range of colors.
Valve lets you stream Steam games from anywhere

2:34pm, 14th March, 2019
Valve doesn’t want to miss the cloud gaming bandwagon. As PC Gamer spotted, the company quietly launched a beta version of Steam Link Anywhere. As the name suggests, it lets you turn your gaming PC into a cloud gaming server and stream games from… anywhere.

The company’s strategy is a bit puzzling here, as Valve recently discontinued its hardware set-top box, the Steam Link. While Valve might be done on the hardware side, the company is still iterating on Link apps. You can now download the Steam Link app on an Android phone, an Android TV device or a Raspberry Pi. Unfortunately, Valve still hasn’t found a way to release its Steam Link app on the App Store for iOS devices and the Apple TV.

You can start Steam on your computer and play demanding PC games on other screens. Steam Link works fine on a local network, especially if you use Ethernet cables between all your devices. With Steam Link Anywhere, your performance will vary depending on your home internet connection. If you don’t have a fiber connection at home, the latency might simply be too high to play any game.

Now let’s see if Valve plans to flip the switch and let you run Steam games on a server in a data center near you. That would turn Steam Link Anywhere into a true cloud gaming competitor. Microsoft recently showed off Forza Horizon 4 running on an Android phone thanks to Project xCloud. Google has also scheduled a Game Developers Conference keynote, where we should learn more about its gaming projects. It’s clear that everybody wants to turn 2019 into the year of cloud gaming.
Apple’s WWDC kicks off on June 3

12:25pm, 14th March, 2019
Apple’s annual developer conference is returning to San Jose for the third year in a row, at the McEnery Convention Center. This year, WWDC will take place June 3-7. As always, you should expect a keynote on the first day of the event with consumer-focused announcements. This year also marks the 30th anniversary of WWDC.

You can now register on Apple’s developer site for $1,599 — the same price as in previous years. But buying a ticket doesn’t necessarily mean that you’ll get to attend the event. Apple will hold a lottery to select the lucky winners who get to pay to go to a developer conference. You have until March 20 at 5 p.m. to register. Developers will receive a notification the next day if they’ve been selected. The selection process is a bit shorter than last year, so make sure you apply on time.

And if you’re a student, you should consider applying for a scholarship. This year, 350 students will be able to attend the event for free through this process.

In addition to some new announcements on the first day, Apple will hold many technical sessions and hands-on labs to help third-party developers in the Apple ecosystem at large. This conference is mostly aimed at developers working on apps for iOS, macOS, tvOS and watchOS. It’s a good way to understand how new frameworks are going to affect your apps and how you could take advantage of them.
Tiny claws let drones perch like birds and bats

9:15pm, 13th March, 2019
Drones are useful in countless ways, but that usefulness is often limited by the time they can stay in the air. Shouldn’t drones be able to take a load off too? With these special claws attached, they can perch or hang with ease, conserving battery power and vastly extending their flight time.

The claws, created by a highly multinational team of researchers I’ll list at the end, are inspired by birds and bats. The team noted that many flying animals have specially adapted feet or claws suited to attaching the creature to its favored surface. Sometimes they sit, sometimes they hang, sometimes they just kind of lean on it and don’t have to flap as hard. As the researchers write:

In all of these cases, some suitably shaped part of the animal’s foot interacts with a structure in the environment and facilitates that less lift needs to be generated or that power flight can be completely suspended. Our goal is to use the same concept, which is commonly referred to as “perching,” for UAVs [unmanned aerial vehicles].

“Perching,” you say? Go on…

We designed a modularized and actuated landing gear framework for rotary-wing UAVs consisting of an actuated gripper module and a set of contact modules that are mounted on the gripper’s fingers. This modularization substantially increased the range of possible structures that can be exploited for perching and resting as compared with avian-inspired grippers.

Instead of trying to build one complex mechanism, like a pair of articulating feet, the team gave the drones a set of specially shaped 3D-printed static modules and one big gripper. The drone surveys its surroundings using lidar or some other depth-aware sensor. This lets it characterize surfaces nearby and match them to a library of examples that it knows it can rest on.

Squared-off edges can simply be rested on, while a pole can be balanced on. If the drone sees and needs to rest on a pole, it can grab it from above. If it’s a horizontal bar, it can grip it and hang below, flipping up again when necessary. If it’s a ledge, it can use a little cutout to steady itself against the corner, letting it shut off some or all of its motors. These modules can easily be swapped out or modified depending on the mission.

I have to say the whole thing actually seems to work remarkably well for a prototype. The hard part appears to be the recognition of useful surfaces and the precise positioning required to land on them properly. But it’s useful enough — in professional and military applications especially, one suspects — that it seems likely to be a common feature in a few years.

The paper describing this system was recently published. I don’t want to leave anyone out, so it’s by: Kaiyu Hang, Ximin Lyu, Haoran Song, Johannes A. Stork, Aaron M. Dollar, Danica Kragic and Fu Zhang, from Yale, the Hong Kong University of Science and Technology, the University of Hong Kong, and the KTH Royal Institute of Technology.
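The matching step described above (characterize a nearby surface, then look it up in a library of structures the drone knows it can rest on) can be sketched very loosely as follows; the categories, threshold and strategies are invented for illustration and are far simpler than the team’s actual perception pipeline.

```python
from dataclasses import dataclass

@dataclass
class Surface:
    kind: str        # "edge", "pole", "bar", or "ledge" from the perception step
    width_cm: float  # estimated width of the structure

# Invented library: which contact/gripper strategy suits which detected structure.
PERCH_LIBRARY = {
    "pole":  "grab from above with the gripper",
    "bar":   "grip and hang below, flip up when taking off",
    "ledge": "lean against the corner with the cutout module",
    "edge":  "rest on top using the static contact modules",
}

def choose_rest_strategy(surface: Surface, max_grip_cm: float = 6.0) -> str:
    """Pick a resting strategy for a characterized surface, or keep flying."""
    if surface.kind in ("pole", "bar") and surface.width_cm > max_grip_cm:
        return "keep flying: structure too wide for the gripper"
    return PERCH_LIBRARY.get(surface.kind, "keep flying: no matching module")

print(choose_rest_strategy(Surface("bar", 4.0)))  # -> grip and hang below, flip up when taking off
```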
Opportunity’s last Mars panorama is a showstopper

2:45pm, 13th March, 2019
The Opportunity Mars rover may be officially retired, but its legacy lives on — and NASA just shared the last (nearly) complete panorama the robot sent back before it was blanketed in dust.

After more than 5,000 days (or rather sols) on the Martian surface, Opportunity found itself in Endeavour Crater, specifically in Perseverance Valley on the western rim. For the last month of its active life, it systematically imaged its surroundings to create another of its many impressive panoramas. Using the Pancam, which shoots sequentially through blue, green, and deep red (near-infrared) filters, it snapped 354 images of the area, capturing a broad variety of terrain as well as bits of itself and its tracks into the valley.

It’s as perfect and diverse an example of the Martian landscape as one could hope for, and the false-color image (a flatter true-color version is also available) has a special otherworldly beauty to it, which is only added to by the poignancy of this being the rover’s last shot. In fact, it didn’t even finish — a monochrome region in the lower left shows where it needed to add color next.

This isn’t technically the last image the rover sent, though. As the fatal dust storm closed in, Opportunity sent one last thumbnail for an image that never went out: its last glimpse of the sun. After this the dust cloud so completely covered the sun that Opportunity was enveloped in pitch darkness, as its true last transmission showed: all the sparkles and dots in that frame are just noise from the image sensor. It would have been completely dark — and for weeks on end, considering the planetary scale of the storm.

Opportunity had a hell of a good run, lasting and traveling many times what it was expected to and exceeding even the wildest hopes of the team. That right up until its final day it was capturing beautiful and valuable data is testament to the robustness and care with which it was engineered.
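For readers curious how three-filter Pancam exposures become a false-color picture, here is a minimal sketch of the final compositing step, assuming three registered grayscale frames per pointing; the filenames are hypothetical, and the real pipeline involves calibration and mosaicking that this ignores.

```python
# Stack three single-filter exposures into one false-color RGB composite:
# the deep-red/near-infrared frame goes to the red channel, green to green, blue to blue.
import numpy as np
from PIL import Image

def false_color(nir_path: str, green_path: str, blue_path: str) -> Image.Image:
    """Combine three registered grayscale filter frames into a false-color image."""
    channels = [np.asarray(Image.open(p).convert("L"), dtype=np.uint8)
                for p in (nir_path, green_path, blue_path)]
    return Image.fromarray(np.stack(channels, axis=-1))

# Hypothetical frame names for one pointing of the panorama.
# composite = false_color("tile_nir.png", "tile_green.png", "tile_blue.png")
# composite.save("false_color_tile.png")
```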
Apple’s streaming service could feature content from partners

8:15am, 13th March, 2019
A report from Bloomberg shares some of the details about the long-rumored video streaming service from Apple. The company should unveil this service at a press conference in Cupertino on March 25.

While Apple has been working on original shows for its new streaming service, Bloomberg says that most of them won’t be ready for the launch later this month. Apple will probably share some teasers on stage, but the launch lineup will mostly feature third-party content.

Apple is probably talking with everyone, but many premium cable channels still have to make up their minds about Apple’s streaming service. HBO, Showtime and Starz have to decide whether they want to be part of the launch by Friday. It’s unclear if Apple is going to feature some or all content from those partners.

Many of them already have a streaming service of their own. And you can already access their libraries from the TV app on your Apple TV or iOS device. Apple could streamline the experience by letting you subscribe to various content bundles in its own streaming service. Amazon already provides something similar with its channel subscriptions. Netflix and Hulu will likely remain independent services, as they compete directly with Apple’s original content effort.

When it comes to Apple’s other announcement, the company should also unveil its Apple News subscription on March 25. Apple acquired Texture last year and has been working on a digital magazine subscription for a while.

Unsurprisingly, it looks like Apple News' magazine service is prepared to launch on macOS too — Steve Troughton-Smith (@stroughtonsmith)

Once again, details are still thin for this new service when it comes to pricing, availability outside of the U.S. and content. Last month, the WSJ reported that Apple has been working with Goldman Sachs on a credit card that would integrate deeply with the Apple Wallet app. Given that Apple’s event is about services, let’s see if the company talks about this new product as well.
Microsoft shows off Project xCloud with Forza running on an Android phone

6:05am, 13th March, 2019
Microsoft has shared some more details about Project xCloud, along with a first public demo. The company has been working on a cloud game streaming service for a while. Microsoft is preparing for the future of gaming platforms with a device-agnostic service that lets you stream games made for the Xbox One. And the first demo is Forza Horizon 4 running in a data center and then streamed to an Android phone attached to an Xbox One controller via Bluetooth.

"Anywhere we have a good network connection, we'll be able to participate in Project xCloud,” Microsoft head of gaming cloud Kareem Choudhry said in the video. While Forza Horizon 4 is a demanding game and an Android phone is a tiny device, the service won’t be limited to extreme scenarios like that.

Choudhry compared Project xCloud to a music streaming or video streaming service. When you have a Spotify account, you can log in from any device, such as your phone, your computer or your work laptop, and find the same music library and your personal playlists. You can imagine an Xbox-branded service that you could access from any device. Even if your computer has an integrated Intel GPU, you could log in and play a demanding game from that computer. Everything would run in a data center near you.

It’s easy to see how Project xCloud would work with Microsoft’s existing gaming services. The company promises the same games with no extra work for developers. You’ll access your cloud saves, your friends and everything you’re already familiar with if you’re using an Xbox or the Xbox app on your PC. And if you’ve bought an Xbox, an Xbox 360 and an Xbox One over the years, rest assured there will be more Xbox consoles in the future. “It's not a replacement for consoles, we're not getting out of the console business,” Choudhry said.

Other companies have been working on cloud gaming. French startup Blade has been working on Shadow, the most promising service currently available. Shadow lets you access a Windows 10 instance running in a data center.

Microsoft wants to associate technology with content. The company already sells a subscription service: with the Xbox Game Pass, you can play Xbox One and Xbox 360 games for $10 per month. Let’s see how Project xCloud and the Xbox Game Pass work together when Microsoft starts public trials later this year.