Apple could release an updated MacBook Air

11:39am, 21st August, 2018
According to a report from Bloomberg, Apple has been working on multiple new Macs. In particular, the company could be planning to release a new entry-level laptop to replace the aging MacBook Air.

This isn’t the first report about a MacBook Air refresh. While Apple has released a 12-inch retina MacBook, it’s not nearly as cheap as the MacBook Air. It’s also not as versatile, as it only has a single USB Type-C port. And yet, the MacBook Air is arguably Apple’s most popular laptop design in recent years. Many MacBook Air users are still using their trusty device as there isn’t a clear replacement in the lineup right now.

According to Bloomberg, the updated MacBook Air could get a retina display. Other details are still unclear. After Apple updated the MacBook Air in March 2015, the company neglected the laptop for a while. It received an update in June 2017, but it was such a minor update that it looked like the MacBook Air was on its way out. It sounds like neither the entry-level 13-inch MacBook Pro (the one without a Touch Bar) nor the 12-inch MacBook has fostered as much customer interest as the MacBook Air.

Bloomberg also says that the Mac Mini is going to receive an update. The story of the Mac Mini is quite similar, as the product has been neglected for years. Apple last updated the Mac Mini in October 2014 — it’s been nearly four years. And the fact that Apple still sells the Mac Mini from 2014 is embarrassing. You can find tiny desktop PCs that are cheaper, smaller and more powerful. They don’t run macOS, but that’s the only downside.

It’s clear that laptops have taken over the computer market. Desktop computers have become a niche market. That’s why the updated Mac Mini could focus on people looking for a home server and who don’t want to mess around with a Raspberry Pi.
Safety and inspection bot startup Gecko Robotics adds $7 million to the coffers

11:50am, 20th August, 2018
Gecko Robotics aims to save human lives at our nation’s power plants with its wall-climbing robots. To continue doing so, the startup tells TechCrunch it has just secured $7 million from a cadre of high-profile sources, including Founders Fund, Mark Cuban, The Westly Group, Justin Kan and Y Combinator.

We reported on the Pittsburgh-based company when co-founder Jake Loosararian came to the TechCrunch TV studios to show off his device for the camera. Back then, Gecko was in the YC Spring 2016 cohort, working with several U.S. power plants and headed toward profitability, according to Loosararian. You can see the original interview below:

The type of robots Gecko makes are an important part of ensuring safety in industrial and power plant facilities, as they are able to go ahead of humans to check for potential hazards. The robots can climb tanks, boilers, pipelines and other industrial equipment using proprietary magnetic adhesion, ultrasonics, lasers and a variety of sensors to inspect structural integrity, according to a company release. While not cheap — the robots run anywhere from $50,000 to $100,000 — they are also obviously a minuscule cost compared to human life.

Gecko robot scaling the wall for a safety inspection at a power plant.

Loosararian also mentioned his technology was faster and more accurate than what is out there at the moment, using machine learning “to solve some of the most difficult problems,” he told TechCrunch. It’s also a unique enough idea to get the attention of several seasoned investors.

“There has been virtually no innovation in industrial services technology for decades,” Founders Fund partner Trae Stephens told TechCrunch in a statement. “Gecko’s robots massively reduce facility shutdown time while gathering critical performance data and preventing potentially fatal accidents. The demand for what they are building is huge.”

Those interested can see the robots in action in the video below:
Watch Nvidia unveil the RTX 2080 live right here

9:40am, 20th August, 2018
Nvidia is taking advantage of Gamescom in Germany to hold a press conference about its future graphics processing units. The conference will start at 6 PM in Germany, 12 PM in New York, 9 AM in San Francisco.

Just a week after the company unveiled its new Turing architecture, Nvidia could share more details about the configurations and prices of its upcoming products — the RTX 2080, RTX 2080 Ti, etc. The name of the conference, #BeForeTheGame, suggests that Nvidia is going to focus on consumer products and in particular GPUs for gamers. While the GeForce GTX 1080 is still doing fine when it comes to playing demanding games, the company is always working on new generations to push the graphical boundaries of your computer.

According to leaked specifications, you can expect two different products this afternoon. The GeForce RTX 2080 is going to feature 2,944 CUDA cores with 8GB of GDDR6. The GeForce RTX 2080 Ti could feature as many as 4,352 CUDA cores with 11GB of GDDR6.

Nvidia already unveiled Turing-based GPUs for professional workstations last week. The company is expecting significant performance improvements with this new generation, as those GPUs are optimized for ray tracing — the “RT” in RTX stands for ray tracing. While ray tracing isn’t new, it’s hard to process images using this method with current hardware. The RTX GPUs will have dedicated hardware units for this task in particular.

And maybe it’s going to become easier to buy GPUs now that the cryptocurrency mining craze is slowly fading away.
It’s Friday so relax and watch a hard drive defrag forever on Twitch

9:00pm, 17th August, 2018
It’s been a while since I defragged — years, probably, because these days, for a number of reasons, computers don’t really need to. But perhaps it is we who need to defrag. And what better way to defrag your brain after a long week than by watching the strangely satisfying defragmentation process taking place on a simulated DOS machine, complete with fan and HDD noise? Enter TwitchDefrags, which has defrag.exe running 24/7 for your enjoyment.

I didn’t realize how much I missed the sights and sounds of this particular process. I’ve always found ASCII visuals soothing, and there was something satisfying about watching all those little blocks get moved around to form a uniform whole. What were they doing down there on the lower right-hand side of the hard drive anyway? That’s what I’d like to know. Afterwards I’d launch a state-of-the-art game like Quake 2 just to convince myself it was loading faster.

There’s also that nice purring noise that a hard drive would make (and which is recreated here). At least, I thought of it as purring. For the drive, it’s probably like being waterboarded. But I did always enjoy having the program running while keeping everything else quiet, perhaps as I was going to bed, so I could listen to its little clicks and whirrs. Sometimes it would hit a particularly snarled sector and really go to town, grinding like crazy. That’s how you knew it was working. The typo is, no doubt, deliberate.

The whole thing is simulated, of course. There isn’t really just an endless pile of hard drives waiting to be defragged on decades-old hardware for our enjoyment (except in my box of old computer things). But the simulation is wonderfully complete, although if you think about it, you probably never used DOS on a 16:9 monitor, and probably not at 1080p. It’s okay. We can sacrifice authenticity so we don’t have to windowbox it.

The defragging will never stop at TwitchDefrags, and that’s comforting to me. It means I don’t have to build a 98SE rig and spend forever copying things around so I have a nicely fragmented volume. Honestly, they should include this sound on those little white noise machines. For me this is definitely better than whale noises.
The Automatica automates pour-over coffee in a charming and totally unnecessary way

4:40pm, 17th August, 2018
Most mornings, after sifting through the night’s mail haul and skimming the headlines, I make myself a cup of coffee. I use a simple pour-over cone and paper filters, and (in what is perhaps an affectation) I grind the beans by hand. I like the manual aspect of it all. Which is why this robotic pour-over machine is to me so perverse… and so tempting.

Called the Automatica, this gadget, currently raising funds on Kickstarter but seemingly complete as far as development and testing, is basically a way to do pour-over coffee without holding the kettle yourself. You fill the kettle and place your mug and cone on the stand in front of it. The water is brought to a boil and the kettle tips automatically. Then the whole mug-and-cone portion spins slowly, distributing the water around the grounds, stopping after 11 ounces have been dispensed over the correct duration. You can use whatever cone and mug you want, as long as they’re about the right size.

Of course, the whole point of pour-over coffee is that it’s simple: you can do it at home, while on vacation, while hiking or indeed at a coffee shop with a bare minimum of apparatus. All you need is the coffee beans, the cone, a paper filter — although some cones omit even that — and of course a receptacle for the product. (It’s not the simplest — that’d be Turkish, but that’s coffee for werewolves.)

Why should anyone want to disturb this simplicity? Well, for the same reason we have the other 20 methods for making coffee: convenience. And in truth, pour-over is already automated in the form of drip machines. So the obvious next question is, why this dog and pony show of an open-air coffee bot?

Aesthetics! Nothing wrong with that. What goes on in the obscure darkness of a drip machine? No one knows. But this — this you can watch, audit, understand. Even if the machinery is complex, the result is simple: hot water swirls gently through the grounds. And although it’s fundamentally a bit absurd, it is a good-looking machine, with wood and brass accents and a tasteful kettle shape. (I do love a tasteful kettle.)

The creators say the machine is built to last “generations,” a promise which must of course be taken with a grain of salt. Anything with electronics has the potential to short out, to develop a bug, to be troubled by humidity or water leaks. The heating element may fail. The motor might stutter or a hinge catch. But all that is true of most coffee machines, and unlike those, this one appears to be made with care and from high-quality materials. The cracking and warping you can expect in thin molded plastic won’t happen to this thing, and if you take care of it, it should last at least several years.

And it had better, for the minimum pledge price that gets you a machine: $450. That’s quite a chunk of change. But like audiophiles, coffee people are kind of suckers for a nice piece of equipment. There is of course the standard crowdfunding caveat emptor; this isn’t a pre-order but a pledge to back this interesting hardware startup, and if it’s anything like the last five or six campaigns I’ve backed, it’ll arrive late after facing unforeseen difficulties with machining, molds, leaks and so on.
Tomu is a fingernail-sized computer that is easy to swallow

2:40pm, 16th August, 2018
I’m a huge fan of single-board computers, especially if they’re small enough to swallow. That’s why I like the Tomu. This teeny-tiny ARM processor essentially interfaces with your computer via the USB port and contains two LEDs and two buttons. Once it’s plugged in, the little computer can simulate a hard drive or mouse, send MIDI data, and even blink quickly.

The Tomu runs the Silicon Labs Happy Gecko EFM32HG309 and can also act as a security token. It is completely open source, and all the code is on the project’s GitHub page. I bought one for $30 and messed with it for a few hours. The programs are very simple and you can load in various tools, including a clever little mouse mover – maybe to simulate mouse usage for an app – and a little app that blinks the lights quickly. Otherwise you can use it to turn your USB hub into an on-off switch for your computer.

It’s definitely not a fully fledged computer – there are limited I/O options, obviously – but it’s a cute little tool for those who want to do a little open source computing. One problem? It’s really, really small. I’d do more work on mine, but I already lost it while I was clearing off a desk so I could see it better. So it goes.
SNES.party lets you play Super Nintendo with your friends

12:30pm, 16th August, 2018
Hot on the heels of the wonderful NES.party comes Haukur Rosinkranz’s SNES.party, a site that lets you play Super Nintendo with all your buds.

Rosinkranz is Icelandic but lives in Berlin now. He made NES.party a year ago while experimenting with WebRTC and WebSockets, and he updated his software to support the SNES.

“The reason I made it was simply because I discovered how advanced the RTC implementation in Chrome had become and wanted to do something with it,” he said. “When I discovered that it’s possible to take a video element and stream it over the network I just knew I had to do something cool with this and I came up with the idea of streaming emulators.”

He said it took him six months to build the app and a month to add NES support. “It’s hard to say how long it took because I basically created my own framework for web applications that need realtime communication between one or more participants,” he said. He is a freelance programmer.

It’s a clever hack that could add a little fun to your otherwise dismal day. Feel like a little Link to the Past? Pop over and give it a try!
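For the curious, here is a minimal sketch (in browser TypeScript) of the trick Rosinkranz describes: capturing an emulator’s canvas as a MediaStream and shipping it to a friend over WebRTC. It is illustrative only, not SNES.party’s actual code; the `sendSignal` helper and the element ID are hypothetical stand-ins for whatever WebSocket-based signaling layer a real implementation would use.

```typescript
// Minimal sketch: stream an emulator's <canvas> to a peer over WebRTC.
// Illustrative only; not SNES.party's code. `sendSignal` and the element
// ID are hypothetical stand-ins for a WebSocket signaling layer.

declare function sendSignal(msg: { type: string; payload: unknown }): void;

const canvas = document.querySelector<HTMLCanvasElement>("#emulator-canvas")!;
const stream = canvas.captureStream(60); // capture emulator output at ~60 fps

const pc = new RTCPeerConnection({
  iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
});

// Attach the captured video track(s) so they're sent to the guest.
stream.getTracks().forEach((track) => pc.addTrack(track, stream));

// Trickle ICE candidates to the guest via the signaling channel.
pc.onicecandidate = (e) => {
  if (e.candidate) sendSignal({ type: "ice", payload: e.candidate });
};

// Host side: create an offer; the guest answers through the same channel.
async function startHosting(): Promise<void> {
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendSignal({ type: "offer", payload: offer.sdp });
}
```

Controller input from the guest would presumably travel the other direction, over an RTCDataChannel or the same WebSocket, to be fed into the host’s emulator.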
VR optics could help old folks keep the world in focus

5:00pm, 15th August, 2018
The complex optics involved with putting a screen an inch away from the eye in VR headsets could make for smart glasses that correct for vision problems. These prototype “autofocals” from Stanford researchers use depth sensing and gaze tracking to bring the world into focus when someone lacks the ability to do it on their own.

I talked with lead researcher Nitish Padmanaban at SIGGRAPH in Vancouver, where he and the others were showing off the latest version of the system. It’s meant, he explained, to be a better solution to the problem of presbyopia, which is basically when your eyes refuse to focus on close-up objects. It happens to millions of people as they age, even people with otherwise excellent vision.

There are, of course, bifocals and progressive lenses that bend light in such a way as to bring such objects into focus — purely optical solutions, and cheap as well, but inflexible, and they only provide a small “viewport” through which to view the world. And there are adjustable-lens glasses as well, but they must be adjusted slowly and manually with a dial on the side. What if you could make the whole lens change shape automatically, depending on the user’s need, in real time? That’s the idea behind the autofocals, and although the current prototype is obviously far too bulky and limited for actual deployment, the concept seems totally sound.

Padmanaban previously worked in VR, and mentioned what’s called the convergence-accommodation problem. Basically, the way what we see changes in real life when we move and refocus our eyes from far to near doesn’t happen properly (if at all) in VR, and that can produce pain and nausea. Having lenses that automatically adjust based on where you’re looking would be useful there — and indeed some VR developers were showing off just that only ten feet away. But it could also apply to people who are unable to focus on nearby objects in the real world, Padmanaban thought.

This is an old prototype, but you get the idea.

It works like this. A depth sensor on the glasses collects a basic view of the scene in front of the person: a newspaper is 14 inches away, a table 3 feet away, the rest of the room considerably more. Then an eye-tracking system checks where the user is currently looking and cross-references that with the depth map. Having been equipped with the specifics of the user’s vision problem, for instance that they have trouble focusing on objects closer than 20 inches away, the apparatus can then make an intelligent decision as to whether and how to adjust the lenses of the glasses.

In the case above, if the user was looking at the table or the rest of the room, the glasses will assume whatever normal correction the person requires to see — perhaps none. But if they change their gaze to focus on the paper, the glasses immediately adjust the lenses (perhaps independently per eye) to bring that object into focus in a way that doesn’t strain the person’s eyes.

The whole process of checking the gaze, depth of the selected object and adjustment of the lenses takes a total of about 150 milliseconds. That’s long enough that the user might notice it, but the whole process of redirecting and refocusing one’s gaze takes perhaps three or four times that long — so the changes in the device will be complete by the time the user’s eyes would normally be at rest again.

“Even with an early prototype, the Autofocals are comparable to and sometimes better than traditional correction,” reads a short summary of the research published for SIGGRAPH.
“Furthermore, the ‘natural’ operation of the Autofocals makes them usable on first wear.”

The team is currently conducting tests to measure more quantitatively the improvements derived from this system, and to test for any possible ill effects, glitches or other complaints. They’re a long way from commercialization, but Padmanaban suggested that some manufacturers are already looking into this type of method and, despite its early stage, it’s highly promising. We can expect to hear more from them when the full paper is published.
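To make that sense-decide-adjust loop concrete, here is a toy sketch of the logic as described above. Everything in it, from the interfaces to the near-limit value and the reading-glasses arithmetic, is an illustrative assumption rather than the Stanford team’s implementation.

```typescript
// Toy model of the autofocal loop described above. Illustrative only:
// the types, the near-limit value and the lens arithmetic are assumptions,
// not the Stanford prototype's code.

interface Gaze { x: number; y: number }           // where the eyes point, in depth-map coords
type DepthMap = (x: number, y: number) => number; // distance in meters at a given point

const NEAR_LIMIT_M = 0.5; // e.g. a wearer who can't focus closer than ~20 inches

// Reading-glasses arithmetic: the add power (in diopters) needed so an
// object at distance d is no harder to focus on than one at the near limit.
function addPower(d: number): number {
  return d >= NEAR_LIMIT_M ? 0 : 1 / d - 1 / NEAR_LIMIT_M;
}

// One iteration of the ~150 ms cycle: gaze -> depth -> lens adjustment.
function update(gaze: Gaze, depth: DepthMap, setLens: (diopters: number) => void): void {
  const d = depth(gaze.x, gaze.y); // depth of the fixated object
  setLens(addPower(d));            // relaxes to 0 when looking across the room
}
```

With these numbers, a newspaper at 14 inches (about 0.36 m) calls for roughly 1/0.36 - 1/0.5 ≈ 0.8 diopters of extra power, while the table across the room needs none.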
Fitbit’s upcoming Charge 3 to sport full touchscreen, per leak

8:20am, 15th August, 2018
This appears to be the Fitbit Charge 3 and, if it is, several big changes are in the works for Fitbit’s premier fitness tracker band. The leaked images point to the changes.

First, the device has a full touchscreen rather than a clunky quasi-touchscreen like the Charge 2. From the touchscreen, users can navigate the device and even reply to notifications and messages. Second, the Charge 3 will be swim-proof to 50 meters. Finally, and this is a bad one, the Charge 3 will not have GPS built in, meaning users will have to bring a smartphone along for a run if they want GPS data.

Price and availability were not revealed, but chances are the device will hit stores in the coming weeks, ahead of the holidays.

This is a big change for Fitbit. If the above leak is correct on all points, Fitbit is pushing the Charge 3 into smartwatch territory. Dropping GPS is regrettable, but the company probably has data showing a minority of wearers use the feature. With a full touchscreen and a notification reply function, the Charge 3 is gaining a lot of functionality for its size.
This robot maintains tender, unnerving eye contact

5:10pm, 14th August, 2018
Humans already find it unnerving enough when extremely alien-looking robots are kicked and interfered with, so one can only imagine how much worse it will be when they make unbroken eye contact and mirror your expressions while you heap abuse on them. This is the future we have selected.

The Simulative Emotional Expression Robot, or SEER, was on display at SIGGRAPH here in Vancouver, and it’s definitely an experience. The robot is a small humanoid head and neck that responds to the nearest person by making eye contact and imitating their expression. It doesn’t sound like much, but it’s pretty complex to execute well, which, despite a few glitches, SEER managed to do.

At present it alternates between two modes: imitative and eye contact. Both, of course, rely on a nearby (or, one can imagine, built-in) camera that recognizes and tracks the features of your face in real time.

In imitative mode, the positions of the viewer’s eyebrows and eyelids, and the position of their head, are mirrored by SEER. It’s not perfect — it occasionally freaks out or vibrates because of noisy face data — but when it worked, it managed rather a good version of what I was giving it. Real humans are more expressive, naturally, but this little face with its creepily realistic eyes plunged deeply into the uncanny valley and nearly climbed the far side.

Eye contact mode has the robot moving on its own while, as you might guess, making uninterrupted eye contact with whoever is nearest. It’s a bit creepy, but not in the way that some robots are — when you’re looked at by inadequately modeled faces, it just feels like bad VFX. In this case it was more the surprising amount of empathy you suddenly feel for this little machine. That’s largely due to the delicate, childlike, neutral sculpting of the face and the highly realistic eyes. If an Amazon Echo had those eyes, you’d never forget it was listening to everything you say. You might even tell it your problems.

This is just an art project for now, but the tech behind it is definitely the kind of thing you can expect to be integrated with virtual assistants and the like in the near future. Whether that’s a good thing or a bad one, I guess we’ll find out together.
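For a sense of what imitative mode involves, here is a purely illustrative sketch: track a few facial parameters, then ease servo targets toward them. The smoothing is what keeps noisy face data from making the head twitch. None of this is SEER’s actual code; every name is a hypothetical stand-in.

```typescript
// Illustrative sketch of an "imitative mode" loop: mirror tracked facial
// parameters onto servos, smoothing to damp noisy face data. Every name
// here is a hypothetical stand-in, not SEER's actual code.

interface FaceState {
  browRaise: number; // 0..1, from the face tracker
  eyeOpen: number;   // 0..1
  headYaw: number;   // radians
}

const SMOOTHING = 0.15; // low-pass factor: smaller = steadier but laggier

let current: FaceState = { browRaise: 0.5, eyeOpen: 1, headYaw: 0 };

// Called each camera frame: ease servo targets toward the tracked pose so
// a single noisy frame can't make the head jerk or "freak out."
function step(tracked: FaceState, setServos: (s: FaceState) => void): void {
  current = {
    browRaise: current.browRaise + SMOOTHING * (tracked.browRaise - current.browRaise),
    eyeOpen:   current.eyeOpen   + SMOOTHING * (tracked.eyeOpen   - current.eyeOpen),
    headYaw:   current.headYaw   + SMOOTHING * (tracked.headYaw   - current.headYaw),
  };
  setServos(current);
}
```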
StarVR’s One headset flaunts eye-tracking and a double-wide field of view

3:00pm, 14th August, 2018
While the field of VR headsets used to be more or less limited to Oculus and Vive, numerous competitors have sprung up as the technology has matured — and some are out to beat the market leaders at their own game. The StarVR One brings eye-tracking and a seriously expanded field of view to the game, and the latter especially is a treat to experience.

The company announced the new hardware at SIGGRAPH in Vancouver, where I got to go hands-on and eyes-in with the headset. Before you get too excited, though, keep in mind this set is meant for commercial applications — car showrooms, aircraft simulators and so on. What that means is it’s going to be expensive and not as polished a user experience as consumer-focused sets.

That said, the improvements present in the StarVR One are significant and immediately obvious. Most important is probably the expanded FOV — 210 degrees horizontal and 130 vertical. That’s nearly twice the 110 degrees that the most popular headsets offer, and believe me, it makes a difference. (I haven’t tried the Pimax 8K, which has a similarly wide FOV.)

On Vive and Oculus sets I always had the feeling that I was looking through a hole into the VR world — a large hole, to be sure, but having your peripheral vision be essentially blank made it a bit claustrophobic. In the StarVR headset, I felt like the virtual environment was actually around me, not just in front of me. I moved my eyes around much more rather than turning my head, with no worries about accidentally gazing at the fuzzy edge of the display. A 90 Hz refresh rate meant things were nice and smooth.

To throw shade at competitors, the demo I played (I was a giant cyber-ape defending a tower) could switch between the full FOV and a simulation of the 110-degree one found in other headsets. I suspect it was slightly exaggerated, but the difference really is clear. It’s reasonably light and comfortable — no VR headset is really either. But it doesn’t feel as chunky as it looks.

The resolution of the custom AMOLED display is supposedly 5K, but the company declined to specify the actual resolution when I asked. They did, however, proudly proclaim full RGB pixels and 16 million sub-pixels. Let’s do the math: 16 million divided by 3 makes around 5.3 million full pixels. 5K isn’t a real standard, just shorthand for having around 5,000 horizontal pixels between the two displays. Divide 5.3 million by that and you get 1,060. Rounding those off to semi-known numbers gives us 2,560 pixels (per eye) for the horizontal and 1,080 for the vertical resolution. That doesn’t fit the approximately 16:10 ratio of the field of view, but who knows? Let’s not get too bogged down in unknowns. Resolution isn’t everything — but generally, the more pixels the better.

The other major new inclusion is an eye-tracking system provided by Tobii. We knew eye-tracking in VR was coming; it was demonstrated at CES, and the Fove Kickstarter showed it was at least conceivable to integrate into a headset now-ish. Unfortunately, the demos of eye-tracking were pretty limited (think a heatmap of where you looked on a car) so, being hungry, I skipped them. The promise is good enough for now — eye tracking allows for all kinds of things, including a “foveated rendering” that focuses display power where you’re looking. This too was not being shown, however, and it strikes me that it is likely phenomenally difficult to pull off well — so it may be a while before we see a good demo of it.
One small but welcome improvement that eye-tracking also enables is automatic detection of interpupillary distance, or IPD — it’s different for everyone and can be important to rendering the image correctly. One less thing to worry about.

The StarVR One is compatible with SteamVR tracking, or you can get the XT version and build your own optical tracking rig — that’s for the commercial providers for whom it’s an option.

Although this headset will be going to high-end commercial types, you can bet that the wide FOV and eye tracking in it will be standard in the next generation of consumer devices. Having tried most of the other headsets, I can say with certainty that I wouldn’t want to go back to some of them after having experienced this one. VR is still a long way off from convincing me it’s worthwhile, but major improvements like these definitely help.
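For what it’s worth, the back-of-the-envelope pixel math above checks out. Here it is spelled out; the RGB-stripe layout and the 5,000-pixel reading of “5K” are assumptions, since StarVR hasn’t published the real figures.

```typescript
// Spelling out the article's resolution estimate. Assumes an RGB stripe
// (3 subpixels per pixel) and "5K" meaning ~5,000 horizontal pixels across
// both eyes; StarVR hasn't published the actual panel specs.

const subpixels = 16_000_000;
const fullPixels = subpixels / 3;                 // ~5.33 million pixels
const horizontalBothEyes = 5_000;                 // the "5K" shorthand
const vertical = fullPixels / horizontalBothEyes; // ~1,067 (article's rounded 5.3M gives 1,060)
const horizontalPerEye = horizontalBothEyes / 2;  // 2,500, close to the quoted 2,560

console.log({ fullPixels, horizontalPerEye, vertical });
// -> { fullPixels: 5333333.33..., horizontalPerEye: 2500, vertical: 1066.66... }
```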
This bipedal robot has a flying head

12:50pm, 14th August, 2018
Making a bipedal robot is hard. You have to make sure it maintains exquisite balance at all times and, even with the amazing things modern robotics can do, there is still a chance that your crazy robot will fall over and bop its electronic head. But what if that head is a quadcopter?

Researchers at the University of Tokyo have done just that with their wild Aerial-Biped. The robot isn’t truly bipedal — it’s designed instead to act like a bipedal robot without the tricky issue of actually balancing on two legs. Think of these legs as more a sort of fun bit of puppetry that mimics walking but doesn’t really walk.

“The goal is to develop a robot that has the ability to display the appearance of bipedal walking with dynamic mobility, and to provide a new visual experience. The robot enables walking motion with very slender legs like those of a flamingo without impairing dynamic mobility. This approach enables casual users to choreograph biped robot walking without expertise. In addition, it is much cheaper compared to a conventional bipedal walking robot,” the team explained.

The robot, with its bizarre-looking, spindly legs, learned how to walk convincingly through machine learning, a feat that gives it a realistic gait even though it is really an aerial system. It’s definitely a clever little project and could be interesting at a theme park or in an environment where a massive bipedal robot falling over on someone might be discouraged.
This happy robot helps kids with autism

3:12pm, 13th August, 2018
A little bot named QTrobot from LuxAI could be the link between therapists, parents and autistic children. The robot, which features an LCD face and robotic arms, allows kids who are overwhelmed by human contact to become more comfortable in a therapeutic setting.

The project comes from LuxAI, a spin-off of the University of Luxembourg. The company will present its findings at a conference at the end of this month.

“The robot has the ability to create a triangular interaction between the human therapist, the robot, and the child,” co-founder Aida Nazarikhorram told TechCrunch. “Immediately the child starts interacting with the educator or therapist to ask questions about the robot or give feedback about its behavior.”

The robot reduces anxiety in autistic children, and the researchers saw many behaviors – hand flapping, for example – slow down with the robot in the mix.

Interestingly, the robot is a better choice for children than an app or tablet. Because the robot is “embodied,” the researchers found that it draws attention and improves learning, especially when compared to a standard iPad/educational-app pairing. In other words, children play with tablets but work with robots.

The robot is entirely self-contained and easily programmable. It can run for hours at a time and includes a 3D camera and full processor. The researchers found that the robot doesn’t become the focus of the therapy but instead helps the therapist connect with the patient. This, obviously, is an excellent outcome for an excellent (and cute) little piece of technology.
Security researchers found a way to hack into the Amazon Echo

10:52am, 13th August, 2018
Hackers at DefCon have exposed new security concerns around smart speakers. Tencent’s Wu HuiYu and Qian Wenxiang spoke at the security conference, giving a presentation that explained how they hacked into an Amazon Echo speaker and turned it into a spy bug.

The hack involved a modified Echo, which had parts swapped out, including some that had been soldered on. The modified Echo was then used to hack into other, non-modified Echos by connecting both the hackers’ Echo and a regular Echo to the same LAN. This allowed the hackers to turn their own, modified Echo into a listening bug, relaying audio from the other Echo speakers without those speakers indicating that they were transmitting.

This method was very difficult to execute, but it represents an early step in exploiting Amazon’s increasingly popular smart speaker. The researchers notified Amazon of the exploit before the presentation, and Amazon has already pushed a patch, according to Wired. Still, the presentation demonstrates how one Echo with malicious firmware could potentially compromise a group of speakers connected to the same network, raising concerns about the idea of Echos in hotels.

Wired explained how the networking feature of the Echo allowed for the hack:

If they can then get that doctored Echo onto the same Wi-Fi network as a target device, the hackers can take advantage of a software component of Amazon’s speakers, known as Whole Home Audio Daemon, that the devices use to communicate with other Echoes in the same network. That daemon contained a vulnerability that the hackers found they could exploit via their hacked Echo to gain full control over the target speaker, including the ability to make the Echo play any sound they chose, or more worryingly, silently record and transmit audio to a faraway spy.

An Amazon spokesperson told Wired that “customers do not need to take any action as their devices have been automatically updated with security fixes,” adding that “this issue would have required a malicious actor to have physical access to a device and the ability to modify the device hardware.” To be clear, the actor would only need physical access to their own Echo to execute the hack.

While Amazon has dismissed concerns that its voice-activated devices are monitoring you, hackers at this year’s DefCon proved that they can.
Samsung turns to Plume for new mesh Wi-Fi product line

8:42am, 13th August, 2018
Samsung today is announcing an updated version of its Wi-Fi product line. The company partnered with Palo Alto-based Plume Design to provide the software that powers the devices. According to Samsung, Plume’s platform uses artificial intelligence to allocate bandwidth across connected devices while delivering the best possible Wi-Fi coverage throughout a home. Plus, by using Plume, Samsung gets to say its Wi-Fi system uses AI, which is a big marketing win.

The system also includes a SmartThings Hub like the previous generation, allowing owners to build a connected IoT home without having to buy another box.

“Integrating our adaptive home Wi-Fi technology and a rich set of consumer features into SmartThings’ large, open ecosystem truly elevates the smart home experience,” said Fahri Diner, co-founder and CEO of Plume, in a statement. “Samsung gives you myriad devices to consume content and connect, and Plume ensures that your Wi-Fi network delivers a superior user experience to all of those devices.”

Plume Design was founded in 2014 and was one of the first to offer a consumer-facing mesh network product line. Since then, though, nearly every home networking company has followed suit, and Plume has been forced to find new ways to make use of its technology. In June 2017, Comcast invested in Plume and began using Plume technology to power its xFi mesh networking product. According to Comcast at the time of xFi’s nationwide launch, Comcast licensed the Plume technology, then reconfigured some aspects of it to integrate with xFi. It also designed its own pods in-house — which sounds similar to what Samsung is doing here, too. Plume Design has to date raised $42.2 million over three rounds of funding.

Samsung’s new SmartThings WiFi mesh router is priced competitively with comparable products. A three-pack of the units costs $279, while a single unit is $119.
Understanding smartwatches

3:22pm, 12th August, 2018
I was wrong. Several years ago I reviewed the first Garmin Fenix 3 smartwatch. This was before the release of the Apple Watch. That’s key to this story. The Apple Watch would be better in every way, I pointed out, and therefore there would be little reason to buy the Fenix 3. But here I am, in the middle of the woods, wearing the fifth generation of the Garmin Fenix while my Apple Watch sits at home on my desk.

In some ways I was right. The Apple Watch is better by most measurable attributes: there are more apps, the screen is superior, there’s a vibrant accessory market, and it’s thinner, faster and cheaper. The Garmin Fenix is big, clunky and the screen looks like it’s from a Kindle. It’s not a touchscreen, nor does it have the number of apps or band options of the Apple Watch. I like it. To me, the Garmin Fenix is akin to a modern Casio G-Shock, and that’s what I want to wear right now.

Smartwatches are often reviewed like phones or vacuums. Specs are compared, and conclusions are drawn. Wearability is talked about, and functions are tested. If the watch has a swimming option, take it in a pool, never mind the fact the reviewer hasn’t done a lap since high school.

I started out doing the same thing with this Garmin. I took it kayaking. I had kayaked twice in my life, and dear reader, I’m here to report the watch performed well on this kayak trip. The watch has topographic maps, which are novel though they weren’t much use on the river. It has a cadence beat to help keep strokes consistent. I tried it all. I ended up drinking a lot of Michigan beer instead of tracking the performance of the watch. Sorry.

Still, performance matters to a point. Here’s my OG review of the Garmin Fenix 5: The watch is substantial, even on my wrist. The screen is underwhelming, though it’s always on and visibility improves in sunlight. The buttons have great tactile feedback. The watch is waterproof to the extent it survived a flipped kayak and hours in Lake Michigan. The battery lasts nearly a week. The watch does not know when it’s on or off the wrist, so notifications will cause it to buzz while it’s on your nightstand. But most of that doesn’t matter. The Garmin Fenix 5 is exceptional, and I love wearing it.

Smartwatches need to be reviewed like ordinary watches. I need to explain more about how the watch feels rather than what it does or how it works. At this point, several years into smartwatches, it’s not notable that a smartwatch does smartwatch things. Of course it tracks steps and heart rate and displays select notifications from my phone. If those items work, then they’re not important in a review.

Take the Citizen Skyhawk line. It packs a highly sophisticated complication that’s designed, so the maker says, for pilots. Ball makes a lovely line intended to provide accurate timekeeping for train conductors. There are watches for high magnetic fields, tactical operators, race car drivers and, of course, countless for divers. Here’s my point: the vast majority of these watches are not used by divers or train conductors or fighter pilots.

This Garmin Fenix watch, much like the Apple Watch or a Rolex diver, can be an aspirational item. It’s like the juicer in my kitchen or the rowing machine in my basement. I got it because I wanted to be a person who woke up and juiced some veggies before my workout. I haven’t used either in months.

Smartwatches are different from smartphones and need to be reviewed as such. This Garmin Fenix watch has many modes I would never use, yet I love the watch. There’s a base jumping mode.
I’m not jumping off a cliff. There’s a tactical mode and a golf mode and an open-water mode, and I have no desire to be in situations where I need to track such activities. But I like the thought of having them available if I ever wanted to monitor my heartbeat while shooting targets.

The smartwatch industry is approaching a point where features are secondary to design. It’s expected that the watch will track steps and heartbeat while providing access to various features. It’s like the time and date on a regular watch. Past that, the watch needs to fit a person’s aspirations. Everyone is different, but to me, this is how it’s laid out: The Apple Watch is for those looking for the top-tier experience regardless of the downsides of constant charging and a delicate exterior. Android watches are for those looking for something similar, but in a counter-culture way. Samsung’s smartwatch is interesting and, with the new Galaxy Watch, finally reaching maturity. And there are fashion smartwatches with fewer features but designs that make a statement. That’s where this Garmin watch lives, and I’m okay with that. Fossil and Timex watches live here too. Using the Apple Watch as a standard, some of these fashion watches cost more, and some cost less, but they all say something an Apple Watch does not.

I’m bored with the Apple Watch, and right now I’m into thinking I live the type of life that needs a smartwatch that tracks every aspect of a triathlon. I don’t need all these features, but I like to think I do. I also don’t need a GMT watch with a third timezone, and I don’t need a watch with a hacking movement as if I need to synchronize my watch with the other members of my special forces squad. But I have those watches, along with dive watches and anti-magnetic watches. I’m not alone. The watch industry has long existed on selling lifestyles.

I was wrong before. The Apple Watch isn’t better than this Garmin or most other smartwatches — at least it’s not better for me right now. Maybe two weeks from now I’ll want to wear an Apple Watch, and not because it’s better, but because it makes a different statement.
Sony’s 10″ Digital Paper Tablet is an ultra-light reading companion that needs to do more

6:04pm, 9th August, 2018
Last year I had a good time reviewing Sony’s big DPT-RP1 e-paper tablet alongside the reMarkable. They both had their strengths and weaknesses, and one of the knocks against Sony’s was that the thing was just plain big. They’ve remedied that with the DPT-CP1, a 10-inch version of the device, and it’s just as useful as I expected. Which is to say: in a very specific way.

Sony’s e-paper tablets are single-minded little gadgets: all they do is let you read and lightly mark up PDFs. If that sounds a mite too limited to you, you’re not the target demographic. But lots of people — including me — have to wade through tons of PDFs, and it’s a pain to do so on a desktop or laptop. Who wants to read by hitting the down arrow 500 times? For legal documents and scientific journal articles, which I read a lot of, a big e-paper tablet is fantastic.

But the truth is that the RP1, with its 13.3″ screen, was simply too big to carry around most of the time. The device is quite light, but it took up too much space. So I was excited to check out the CP1, which really is just a smaller version of the same thing.

To be honest, there’s not much I can add to my original review of the RP1: it handles PDFs easily, and now, with improved page jumping and tagging, it’s easier to navigate them. And using the stylus, you can make some limited markup — but don’t try to do much more than mark a passage with an “OK” or a little star (one of several symbols the device recognizes and tracks the location of). It’s incredibly light and thin, and feels flexible and durable as well — not a fragile device at all. Its design is understated and functional.

Writing isn’t the Sony tablets’ strong suit — that would be the reMarkable’s territory. While looping out a circle or striking through a passage is just fine, handwritten notes are a pain. The resolution, accuracy and latency of the writing implement are, as far as I can tell, exactly as they were on the larger Sony tablet, which makes sense — the CP1 basically is a cutout of the same display and guts.

PDFs display nicely, and the grid pattern on the screen isn’t noticeable for the most part. Contrast isn’t as good as the latest Kindles or Kobos, but it’s more than adequate, and it beats reading a big PDF on a small screen like those or your laptop’s LCD. Battery life is excellent — it’ll go through hundreds of pages on a charge.

A new mobile app supposedly makes transferring documents to the CP1 easy, but in reality I never found a reason to use it. I so rarely download PDFs — the only format the tablet reads — on my phone or tablet that it just didn’t make sense for me. Perhaps I could swap a few over that are already on my iPad, but it didn’t strike me as particularly practical except perhaps in a few situations where my computer isn’t available. But that’s just me — people who work more from their phones might find this much more useful.

Mainly I just enjoyed how light and simple the thing is. There’s almost no menu system to speak of, and the few functions you can do (zooming in and such) are totally straightforward. Whenever I got a big document, like today’s FCC OIG report, or a set of upcoming scientific papers, my first thought was, “I’ll stick these on the Sony and read them on the couch.” Although I value its simplicity, it really could use a bit more functionality. A note-taking app that works with a Bluetooth keyboard, for instance, or synchronization with your Simplenote or Pocket account.
The reMarkable is still limited as well, but its excellent stylus (suitable for sketching) and cloud service help justify the price. I have to send this thing back now, which is a shame, because it’s definitely a handy device. Of course, its price makes it rather a niche one as well — but perhaps it’s the kind of thing that fills out the budget of an IT upgrade or grant proposal.
Samsung announces Spotify as its go-to music partner

11:34am, 9th August, 2018
Samsung didn’t just unveil new devices like the Galaxy Note 9, the Galaxy Watch and of course the Galaxy Home at its Unpacked event this morning — it also announced a partnership with Spotify. The goal is to create a seamless cross-device listening experience on Samsung devices, including the ones announced today. As demonstrated on stage, you should be able to start playing a song on your phone, then switch over to your TV, then over to your Galaxy Home.

This integration will allow you to play Spotify on your Samsung Smart TV through the SmartThings app, and it deepens the integration between Spotify and Samsung’s voice assistant Bixby, making Spotify the default choice whenever you ask Bixby to look for music. In addition, Spotify will become part of the set-up experience on Samsung devices.

For Spotify, this partnership should mean more visibility, making it the preferred music experience on Samsung devices. And for Samsung, it highlights one of its differences compared to Apple, which has been pushing its own music service as it rolls out new devices like the HomePod.

Spotify CEO Daniel Ek took the stage at Unpacked to talk about the partnership, which he also discussed in a blog post. “We believe that this significant long-term partnership will provide Samsung users across millions of devices with the best possible music streaming experience, and make discovering new music easier than ever – with even more opportunities to come,” Ek said.
Fossil announces updated Android Wear watches with HR tracking, GPS

4:04pm, 8th August, 2018
Fossil’s Q line is an interesting foray by a traditional fashion watchmaker into the wearable world. The latest additions to the line, the new Venture and Explorist, add a great deal of Android Wear functionality to a watch that is reminiscent of Fossil’s earlier, simpler watches. In other words, these are some nice, low-cost smartwatches for the fitness fan.

The original Q watches were simpler, hybrid designs. As the company expanded into wearables, however, it went the Android Wear route and created a number of lower-powered touchscreen watches. Now, thanks to a new chipset, Fossil is able to add a great deal more functionality in a nice package.

The Venture and the Explorist add untethered GPS, NFC, heart-rate tracking and 24-hour battery life. They also include an altimeter and a gyroscope. The new watches start at $255 and run the Snapdragon Wear 2100, a chipset optimized for fitness watches.

The watches come in multiple styles and with multiple bands, and feature 36 different faces, including health- and fitness-focused faces for the physically ambitious. They also allow you to pay with Google Pay – Apple Pay isn’t supported – and you can store content on the watch for runs or walks. They also track swims and are waterproof.

The Venture and Explorist are 40mm and 45mm respectively, and the straps are interchangeable. While they’re no $10,000 Swiss masterpiece, these things look – and work – pretty good.
Magic Leap One AR headset for devs costs more than 2x the iPhone X

9:34am, 8th August, 2018
It’s been a long wait, but mixed reality headgear maker Magic Leap will ship its first piece of hardware this summer. We were still waiting on the price tag — but it’s just been officially revealed: the developer-focused ‘creator edition’ headset will set you back at least $2,295.

So, a considerable chunk of change — albeit the device is not intended as a mass-market consumer product (although Magic Leap’s founder frothed about it being “at the border of practical for everybody” in an interview) but rather an AR headset for developers to create content that could excite future consumers.

“Here we go. One Creator Edition is now available to purchase,” the company tweeted.

A ‘Pro’ version of the kit — with an extra hub cable and some kind of rapid replacement service if the kit breaks — costs an additional $495. Certain (possibly necessary) extras, such as prescription lenses, also cost more. So it’s pushing toward 3x iPhone Xes at that point.

The augmented reality startup, which has raised at least $2.3 billion from a string of high-profile investors including Google, Alibaba, Andreessen Horowitz and others, is only offering its first piece of reality-bending eyewear to “creators in cities across the contiguous U.S.” Potential buyers are asked to input their zip code via its website to check whether it will agree to take their money, but it adds that “the list is growing daily.” We tried the TC SF office zip and — unsurprisingly — got an affirmative of delivery there. But any folks in, for example, Hawaii wanting to spend big to space out are out of luck for now…

Reports say the headset is only available in six U.S. cities at this stage: Chicago, Los Angeles, Miami, New York, San Francisco (Bay Area) and Seattle — with Magic Leap saying that “many” more will be added in the fall. The company specifies it will “hand deliver” the package to buyers — and “personally get you set up.” So evidently it wants to try to make sure its first flush of expensive hardware doesn’t get sucked down the toilet of dashed developer expectations.

It describes the computing paradigm it’s seeking to shift, i.e. with the help of enthused developers and content creators, as “spatial computing” — but it really needs a whole crowd of technically and creatively minded people to step along with it if it’s going to successfully deliver on that.