Google’s smart home sell looks cluttered and incoherent

7:46am, 10th October, 2018
If any aliens or technology ingenues were trying to understand what on earth a 'smart home' is yesterday, via Google's latest hardware event, they'd have come away with a pretty confused and incoherent picture. The company's presenters attempted to sketch a vision of gadget-enabled domestic bliss but the effect was rather closer to clutter bordering on chaos, with existing connected devices being blamed (by Google) for causing homeowners' device usability and control headaches — which thus necessitated another new type of 'hub' device which was now being unveiled, slated and priced to fix problems of the smart home's own making. Meet the 'Made by Google' Home Hub. Buy into the smart home, the smart consumer might think, and you're going to be stuck shelling out again and again — just to keep on top of managing an ever-expanding gaggle of high maintenance devices. Which does sound quite a lot like throwing good money after bad. Unless you're a true believer in the concept of gadget-enabled push-button convenience — and the perpetually dangled claim that smart home nirvana really is just around the corner. One additional device at a time. Er, and thanks to AI! Yesterday, at Google's event, there didn't seem to be any danger of nirvana though. Not unless paying $150 for a small screen lodged inside a speaker is your idea of heaven. (i.e. after you've shelled out for all the other connected devices that will form the spokes chained to this control screen.) A small tablet that, let us be clear, is defined by its limitations. No, it's not supposed to be an entertainment device in its own right. It's literally just supposed to sit there and be a visual control panel — with the usual also-accessible-on-any-connected-device type of content like traffic, weather and recipes. So $150 for a remote control doesn't sound quite so smart now, does it? The hub doubling as a digital photo frame when not in active use — which Google made much of — isn't some kind of 'magic pixie' sales dust either. Call it screensaver 2.0. A fridge also does much the same with a few magnets and bits of paper. Just add your own imagination. During the presentation, Google made a point of stressing that the 'evolving' smart home it was showing wasn't just about iterating on the hardware front — claiming its AI software is hard at work in the background, hand-in-glove with all these devices, to really 'drive the vision forward'. But if the best example it can find to talk up is AI auto-picking which photos to display on a digital photo frame — at the same time as asking consumers to shell out $150 for a discrete control hub to manually manage all this IoT — that seems, well, underwhelming to say the least. If not downright contradictory. Google also made a point of referencing concerns it said it's heard from a large majority of users that they're feeling overwhelmed by too much technology, saying: "We want to make sure you're in control of your digital well-being." Yet it said this at an event where it literally unboxed yet another clutch of connected, demanding, function-duplicating devices — including the aforementioned tablet-faced speaker (which Google somehow tried to claim would help people "disconnect" from all their smart home tech — so, basically, 'buy this device so you can use devices less'…); a ChromeOS tablet that transforms into a laptop via a snap-on keyboard; and two versions of its new high-end smartphone, the Pixel 3. 
There was even a wireless charging stand that props the phone up in a hub-style control position. (Oh, and Google didn't even have time to mention it during the cluttered presentation, but there's a Disney co-branded something, too, presumably.) What's the average consumer supposed to make of all this incestuously overlapping, wallet-badgering hardware?! Smartphones at least have clarity of purpose — by being efficiently multi-purposed. Increasingly powerful all-in-ones that let you do more with less and don't even require you to buy a new one every year, vs the smart home's increasingly high maintenance and expensive (in money and attention terms) sprawl, duplication and clutter. And that's without even considering the security risks and privacy nightmare. The two technology concepts really couldn't be further apart. If you value both your time and your money the smartphone is the one — the only one — to buy into. Whereas the smart home clearly needs A LOT of finessing — if it's to ever live up to the hyped claims of 'seamless convenience'. Or, well, a total rebranding. The 'creatively chaotic & experimental gadget lovers' home would be a more honest and realistic sell for now — and the foreseeable future. Instead Google made a pitch for what it dubbed the "thoughtful home". Even as it pushed a button to pull up a motorised pedestal on which stood clustered another bunch of charge-requiring electronics that no one really needs — in the hopes that consumers will nonetheless spend their time and money assimilating redundant devices into busy domestic routines. Or else find storage space in already overflowing drawers. The various iterations of 'smart' in-home devices in the market illustrate exactly how experimental the entire concept remains. Just this week, Facebook waded in with a device which, frankly speaking, looks like something you'd find in a prison warden's office. Google, meanwhile, has housed speakers in all sorts of physical forms, quite a few of which resemble restroom scent dispensers. And Amazon now has so many Echo devices it's almost impossible to keep up. It's as if the ecommerce giant is just dropping stones down a well to see if it can make a splash. During the smart home bits of Google's own-brand hardware pitch, the company's parade of presenters often sounded like they were going through robotic motions, failing to muster anything more than baseline enthusiasm. And failing to dispel a strengthening sense that the smart home is almost pure marketing, and that sticking update-requiring, wired in and/or wireless devices with variously overlapping purposes all over the domestic place is the very last way to help technology-saturated consumers achieve anything close to 'disconnected well-being'. Incremental convenience might be possible, perhaps — depending on which and how few smart home devices you buy; for what specific purpose/s; and then likely only sporadically, until the next problematic update topples the careful interplay of kit and utility. But the idea that the smart home equals thoughtful domestic bliss for families seems farcical. All this updatable hardware inevitably injects new responsibilities and complexities into home life, with the conjoined power to shift family dynamics and relationships — based on things like who has access to and control over devices (and any content generated); whose job it is to fix things and any problems caused when stuff inevitably goes wrong (e.g. 
a device breakdown OR an AI-generated snafu like the ‘wrong’ photo being auto-displayed in a communal area); and who will step up to own and resolve any disputes that arise as a result of all the Internet connected bits being increasingly intertwined in people’s lives, willingly or otherwise. Hey Google, is there an AI to manage all that yet?
Accion Systems takes on $3M in Boeing-led round to advance its tiny satellite thrusters

5:36am, 10th October, 2018
Accion Systems, the startup aiming to reinvent satellite propulsion with an innovative and tiny new thruster, has attracted significant investment from Boeing's HorizonX Ventures. The $3 million round should give the company a bit of breathing room while it continues to prove and improve its technology. "Investing in startups with next-generation concepts accelerates satellite innovation, unlocking new possibilities and economics in Earth orbit and deep space," said HorizonX Ventures managing director Brian Schettler in a press release. Accion, whose founder and CEO Natalya Bailey graced the stage of Disrupt just a few weeks ago, makes what's called a "tiled ionic liquid electrospray" propulsion system, or TILE. This system is highly efficient and can be made the size of a postage stamp or much larger depending on the requirements of the satellite. Example of a TILE attached to a satellite chassis. The company has tested its tech in terrestrial facilities and in space, but it hasn't been used for any missions just yet — though that may change soon. A pair of student-engineered cubesats equipped with TILE thrusters are scheduled to take off on Rocket Lab's first big commercial payload launch, "It's Business Time." It's been delayed a few times but early November is the next launch window, so everyone cross your fingers. Another launch scheduled for November is the IRVINE 02 cubesat, which will sport TILEs and go up aboard a Falcon 9 loaded with supplies for the International Space Station. The Boeing investment (Gettylab also participated in the round) doesn't include any guarantees like equipping Boeing-built satellites with the thrusters. But the company is certainly already dedicated to this type of tech and the arrangement is characterized as a partnership — so it's definitely a possibility. Natalya Bailey and Rob Coneybeer (Shasta Ventures) at Disrupt Berlin 2017. A Boeing representative told me that this is aimed to help Accion scale, and that the latter will have access to the former's testing facilities and expertise. "We believe there will be many applications for Accion's propulsion system, and will be monitoring and assessing the tech as it continues to mature," they wrote in an email. I asked Accion what the new funding will be directed towards, but a representative only indicated that it would be used for the usual things: research, operations, staff expenses, and so on. Not some big skunk works project, then. The company's last big round saw it raise $7.5 million.
The Salto-1P now does amazing targeted jumps

4:36pm, 9th October, 2018
When we last met the Salto-1P, it was bopping around like a crazed grasshopper. Now researchers have added targeting systems to the little creature, allowing it to maintain a constant hop while controlling exactly when and where Salto lands. Called "deadbeat foot placement hopping control," the new system lets Salto watch a surface for a target and essentially fly over to where it needs to land using built-in propellers. Researchers Duncan Haldane, Justin Yim and Ronald Fearing created the Salto as part of an ongoing robotics research project at UC Berkeley. The team upgraded Salto's controller to make it far more precise on landing, a feat that was almost impossible using the previous SLIP-based controller. "The robot behaves more or less like a spring-loaded inverted pendulum, a simplified dynamic model that shows up often enough in both biology and robotics that it has its own acronym: SLIP," as one write-up of the research explained. "Way back in the 1980s, Marc Raibert developed a controller for SLIP-like robots, and people are still using it today, including Salto-1P up until just recently."
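For readers curious what that classic Raibert-style controller actually computes, here is a minimal sketch in Python of the foot-placement rule for a SLIP-like hopper. It illustrates the general technique referenced in the quote above, not the Berkeley team's code; the gains and timing values are invented for the example.

# Minimal sketch of a Raibert-style foot-placement rule for a SLIP-like hopper.
# Illustrative only; not the Salto-1P controller. Gains and timings are made up.

def raibert_foot_placement(velocity, desired_velocity, stance_duration, k_gain=0.05):
    """Return horizontal foot placement (meters, relative to the hip).

    The 'neutral point' (velocity * stance_duration / 2) maintains the current
    speed; the proportional term nudges the landing spot to speed up or slow down.
    """
    neutral_point = velocity * stance_duration / 2.0
    correction = k_gain * (velocity - desired_velocity)
    return neutral_point + correction

if __name__ == "__main__":
    # Hopping forward at 1.2 m/s while trying to slow to 1.0 m/s:
    x_foot = raibert_foot_placement(velocity=1.2, desired_velocity=1.0, stance_duration=0.1)
    print("place the foot {:.3f} m ahead of the hip".format(x_foot))

The "deadbeat" approach described in the article goes a step further, planning each hop so the robot lands on the chosen target rather than merely converging toward a desired speed.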
Comparing Google Home Hub vs Amazon Echo Show 2 vs Facebook Portal

2:26pm, 9th October, 2018
The war for the countertop has begun. Google, Amazon and Facebook all revealed their new smart displays this month. Each hopes to become the center of your Internet of Things-equipped home and a window to your loved ones. The Google Home Hub is a cheap and privacy-safe smart home controller. The Amazon Echo Show 2 gives Alexa a visual complement. And the Facebook Portal and Portal+ offer a Smart Lens that automatically zooms in and out to keep you in frame while you video chat. For consumers, the biggest questions to consider are how much you care about privacy, whether you really video chat, which smart home ecosystem you're building around and how much you want to spend. For the privacy obsessed, the Home Hub is the only one without a camera and it's dirt cheap at $149. For the privacy agnostic, the Portal+ offers the best screen and video chat functionality. For the chatty, the Echo Show can do message and video chat over Alexa, call phone numbers and is adding Skype. If you want to go off-brand, there's also the Lenovo Smart Display, with stylish hardware in a 10-inch version and a $199 8-inch 720p version. And for the audiophile, there's the JBL Link View. While those hit the market earlier than the platform-owned versions we're reviewing here, they're not likely to benefit from the constant iteration Google, Amazon and Facebook are working on for their tabletop screens. Here's a comparison of the top smart displays, including their hardware specs, unique software, killer features and pros and cons:
Here are all the details on the new Pixel 3, Pixel Slate, Pixel Stand, and Home Hub

12:16pm, 9th October, 2018
At a special event in New York City, Google announced some of its latest flagship hardware devices. During the hour-long press conference Google executives and product managers took the wraps off the company's latest products and explained their features. Chief among the lot is the Pixel 3, Google's latest flagship Android device. Like the Pixel 2 before it, the Pixel 3's main feature is its stellar camera, but there's a lot more magic packed inside the svelte frame. Contrary to some earlier renders, the third version of Google's flagship (spotted by 9to5Google) does boast a sizable notch up top, in keeping with earlier images of the larger XL. Makes sense, after all: Google went out of its way to boast about notch functionality when it introduced Pie, the latest version of its mobile OS. The device is available for preorder today and will start shipping October 18, starting at $799. The larger XL starts at $899, still putting the product at less than the latest flagships from Apple and Samsung. The Pixel Slate, meanwhile, looks pretty much exactly like the leaks led us to believe — it's a premium slate with a keyboard cover that doubles as a stand. It also features a touch pad, which gives it the edge over products like Samsung's most recent Galaxy Tab. There's also a matching Google Pen, which appears to more or less be the same product announced alongside the Pixelbook, albeit with a darker paint job to match the new product. The product starts at $599, plus $199 for the keyboard and $99 for the new dark Pen. All three are shipping at some point later this year. The Home Hub looks like an Android tablet mounted on top of a speaker — which ought to address the backward-firing sound that is one of the largest design flaws of the recently introduced Echo Show 2. The speaker fabric comes in a number of different colors, in keeping with the rest of the Pixel/Home products, including the new Aqua. When not in use, the product doubles as a smart picture frame, using albums from Google Photos. A new Live Albums feature auto-updates based on the people you choose. So you can, say, select your significant other and it will create a gallery based on that person. Sweet and also potentially creepy. Machine learning, meanwhile, will automatically filter out all of the lousy shots. The Home Hub is up for pre-order today for a very reasonable $149. In fact, the device actually seems like a bit of a loss leader for the company in an attempt to hook people into the Google Assistant ecosystem. It will start shipping October 22. The Pixel Stand is basically a sleek little round dock for your phone. While it can obviously charge your phone, what's maybe more interesting is that when you put your phone into the cradle, it looks like it'll start a new notifications view that's not unlike what you'd see on a smart display. It costs $79.
Review: The Marshall Woburn II packs modern sound, retro look

10:06am, 9th October, 2018
Marshall speakers stand out. That's why I dig them. From the company's headphones to its speakers, the audio is warm and full just like the classic design suggests. The company today is announcing revisions across its lines. The new versions of the Acton ($249), Stanmore ($349) and Woburn ($499) Bluetooth speakers now feature Bluetooth 5.0, an upgraded digital signal processor and a slightly re-worked look. Marshall also announced a new version of the Minor wireless in-ear headphones. The wireless headphones were among the company's first products and the updated version now features Bluetooth 5.0 aptX connectivity, new 14.2 mm drivers and 12 hours of battery life. Marshall also says the redesigned model will stay in place better than the original model. It's important to note that the company behind these Marshall speakers and headphones is different from the company that makes the iconic guitar amp, though there is collaboration. The Marshall brand is used by Zound Industries, which also operates Urbanears. The models produced by Zound Industries stay true to the Marshall brand. I've used several of the products since the company launched and I'm pleased to report that this new generation packs the magic of previous models. The company sent me the new Woburn II speaker (pictured above) and it's a lovely speaker. This is the largest speaker in the company's line. It's imposing and, in Reddit-speak, an absolute unit. It's over a foot tall and weighs just under 20 lbs. The speaker easily fills a room. The sound is warm and inviting. The Woburn II features a ported design which helps create the rich sound. Bass is deep though doesn't pound. Mid-tones are lovely and the highs are perfectly balanced. If they're not, there are knobs mounted on the top to adjust the tones. I find the Woburn a great speaker at any volume. Turn it down and the sound still feels as complex as it does at normal listening volumes. Crank the speaker to 10, drop the treble a bit, and the speaker will shake walls. Don't be scared by the imposing size. The Woburn II can party, but it is seemingly just as happy to spend the evening in, playing some Iron and Wine. Sadly, the Woburn II lacks some of the magic of the original Woburn. The new version does not have an optical input and the power switch is a soft switch. It's just for looks. The first Woburn had a two-position switch. Click one way to turn on and click the other to turn off. It was an analog experience. This time around the speaker retains the switch, but the switch is different. It's artificial and might as well be a power button. When pressed forward, the switch turns on the speaker and then snaps back to its original position. The clicking is gone. I know that seems like a silly thing to complain about but that switch was part of the Marshall experience. It felt authentic and now it feels artificial. Like past models, the speaker is covered in a vinyl-like material and the front of the speaker is covered in fabric. Don't touch this fabric. It stains. The review sample sent to me came with stains already on the fabric. The Woburn II is a fantastic speaker with a timeless look. At $499 it's pricey but produces sound above its price-point rivals. I expect the same performance out of the updated Acton II and Stanmore II speakers. These speakers are worthy of the Marshall name.
The Casio Rangeman GPR-B1000 is a big watch for big adventures

2:36pm, 8th October, 2018
The Casio Rangeman GPR-B1000 is comically large. That's the first thing you notice about it. Based on the G-Shock design, this massive watch is 20.2mm thick and about 60mm in diameter, a true dinner plate of a watch. Inside the heavy case is a dense collection of features that will make your next outdoor adventure great. The GPR-B1000, which I took for an extended trip through Utah and Nevada, is an outdoor marvel. It has all of the standard hiking watch features including compass, barometer, altimeter, and solar charging, but the watch also has built-in GPS mapping, logging, and backtracking. This means you can set a destination and the watch will lead you there, and you can later use your GPS data to recreate your trek or even backtrack out of a sticky situation. This is not a sports watch. It won't track your runs or remind you to go to your yoga class. Instead it's aimed at the backwoods hiker or off-piste skier who wants to get from Point A to Point B without getting lost. The watch connects to a specialized app that lets you set the destinations, map your routes, and even change timezones when the phone wakes up after a flight. These odd features make this a traveler's dream. The watch design is also unique for Casio. Instead of a replaceable battery the device charges via sunlight or with an included wireless charger. It has a ceramic caseback – a first for Casio – and the charger fits on like a plastic parasite. It charges via micro USB. It has a crown on the side that controls scrolling through various on-screen menus and the rest of the functions are accessed easily from dedicated buttons around the bezel. The watch is mud- and water-proof to 200 meters and it can survive temperatures of minus 20 degrees Celsius. It is also shock resistant. The $800 GPR-B1000 is a beefy watch. It's not for the faint of wrist and definitely requires a bit of dedication to wear. I loved it while hiking up and down canyons and mountains and it was an excellent travel companion. One of the coolest features is quite simply being able to trust that the timezone is correct as soon as you land in Europe from New York. That said, you should remember that this watch is for "Adventure Survival," as Casio puts it. It's not a running watch and it's not a fashion piece. At $800 it's one of Casio's most expensive G-Shocks and it's also the most complex. If you're an avid hiker, however, the endless battery, GPS, and trekking features make it a truly valuable asset.
D-Wave offers the first public access to a quantum computer

6:19am, 6th October, 2018
Outside the crop of construction cranes that now dot Vancouver's bright, downtown greenways, in a suburban business park that reminds you more of dentists and tax preparers, is a small office building belonging to D-Wave. This office — squat, angular and sun-dappled one recent cool autumn morning — is unique in that it contains an infinite collection of parallel universes. Founded in 1999 by Geordie Rose, D-Wave worked in relative obscurity on esoteric problems associated with quantum computing. When Rose was a PhD student at the University of British Columbia, he turned in an assignment that outlined a quantum computing company. His entrepreneurship teacher at the time, Haig Farris, found the young physicist's ideas compelling enough to give him $1,000 to buy a computer and a printer to type up a business plan. The company consulted with academics until 2005, when Rose and his team decided to focus on building usable quantum computers. The result, the Orion, launched in 2007, and was used to classify drug molecules and play Sudoku. The business now sells computers for up to $10 million to clients like Google, Microsoft and Northrop Grumman. "We've been focused on making quantum computing practical since day one. In 2010 we started offering remote cloud access to customers and today, we have 100 early applications running on our computers (70 percent of which were built in the cloud)," said CEO Vern Brownell. "Through this work, our customers have told us it takes more than just access to real quantum hardware to benefit from quantum computing. In order to build a true quantum ecosystem, millions of developers need the access and tools to get started with quantum." Now their computers are simulating weather patterns and tsunamis, optimizing hotel ad displays, solving complex network problems and, thanks to a new, open-source platform, could help you ride the quantum wave of computer programming. Inside the box When I went to visit D-Wave they gave us unprecedented access to the inside of one of their quantum machines. The computers, which are about the size of a garden shed, have a control unit on the front that manages the temperature as well as a queuing system to translate and communicate the problems sent in by users. Inside the machine is a tube that, when fully operational, contains a small chip super-cooled to 0.015 Kelvin, or -459.643 degrees Fahrenheit or -273.135 degrees Celsius. The entire system looks like something out of the Death Star — a cylinder of pure data that the heroes must access by walking through a little door in the side of a jet-black cube. It's quite thrilling to see this odd little chip inside its super-cooled home. As the computer revolution maintained its predilection toward room-temperature chips, these odd and unique machines are a connection to an alternate timeline where physics is wrestled into submission in order to do some truly remarkable things. And now anyone — from kids to PhDs to everyone in-between — can try it. Into the ocean Learning to program a quantum computer takes time. Because the processor doesn't work like a classic universal computer, you have to train the chip to perform simple functions that your own cellphone can do in seconds. However, in some cases, researchers have found the chips can outperform classic computers by 3,600 times. This trade-off — the movement from the known to the unknown — is why D-Wave exposed their product to the world. "We built Leap to give millions of developers access to quantum computing. 
We built the first quantum application environment so any software developer interested in quantum computing can start writing and running applications — you don't need deep quantum knowledge to get started. If you know Python, you can build applications on Leap," said Brownell. To get started on the road to quantum computing, D-Wave built the Leap platform. It includes an open-source toolkit for developers. When you sign up you receive one minute's worth of quantum processing unit time which, given that most problems run in milliseconds, is more than enough to begin experimenting. A queue manager lines up your code and runs it in the order received, and the answers are spit out almost instantly. You can code on the QPU with Python, and the platform allows you to connect to the QPU with an API token. After writing your code, you can send commands directly to the QPU and then output the results. The programs are currently pretty esoteric and require a basic knowledge of quantum programming but, it should be remembered, classic computer programming was once daunting to the average user. I downloaded and ran most of the demonstrations without a hitch. These demonstrations — factoring programs, network generators and the like — essentially turned the concepts of classical programming into quantum questions. Instead of iterating through a list of factors, for example, the quantum computer creates a "parallel universe" of answers and then collapses each one until it finds the right answer. If this sounds odd it's because it is. The researchers at D-Wave argue all the time about how to imagine a quantum computer's various processes. One camp sees the physical implementation of a quantum computer to be simply a faster methodology for rendering answers. The other camp, itself aligned with Professor David Deutsch's ideas presented in The Fabric of Reality, sees the sheer number of possible permutations a quantum computer can traverse as evidence of parallel universes. What does the code look like? It's hard to read without understanding the basics, a fact that D-Wave's engineers accounted for by offering online documentation. For example, below is most of the factoring code for one of their demo programs, a bit of code that can be reduced to about five lines on a classical computer. However, when this function uses a quantum processor, the entire process takes milliseconds versus minutes or hours.

Classical:

# Python program to find the factors of a number

# define a function
def print_factors(x):
    # This function takes a number and prints the factors
    print("The factors of", x, "are:")
    for i in range(1, x + 1):
        if x % i == 0:
            print(i)

# change this value for a different result
num = 320

# uncomment the following line to take input from the user
# num = int(input("Enter a number: "))

print_factors(num)

Quantum:

@qpu_ha
def factor(P, use_saved_embedding=True):

    ################################################################################
    # get circuit
    ################################################################################

    construction_start_time = time.time()

    validate_input(P, range(2 ** 6))

    # get constraint satisfaction problem
    csp = dbc.factories.multiplication_circuit(3)

    # get binary quadratic model
    bqm = dbc.stitch(csp, min_classical_gap=.1)

    # we know that multiplication_circuit() has created these variables
    p_vars = ['p0', 'p1', 'p2', 'p3', 'p4', 'p5']

    # convert P from decimal to binary
    fixed_variables = dict(zip(reversed(p_vars), "{:06b}".format(P)))
    fixed_variables = {var: int(x) for (var, x) in fixed_variables.items()}

    # fix product qubits
    for var, value in fixed_variables.items():
        bqm.fix_variable(var, value)

    log.debug('bqm construction time: %s', time.time() - construction_start_time)

    ################################################################################
    # run problem
    ################################################################################

    sample_time = time.time()

    # get QPU sampler
    sampler = DWaveSampler(solver_features=dict(online=True, name='DW_2000Q.*'))
    _, target_edgelist, target_adjacency = sampler.structure

    if use_saved_embedding:
        # load a pre-calculated embedding
        from factoring.embedding import embeddings
        embedding = embeddings[sampler.solver.id]
    else:
        # get the embedding
        embedding = minorminer.find_embedding(bqm.quadratic, target_edgelist)
        if bqm and not embedding:
            raise ValueError("no embedding found")

    # apply the embedding to the given problem to map it to the sampler
    bqm_embedded = dimod.embed_bqm(bqm, embedding, target_adjacency, 3.0)

    # draw samples from the QPU
    kwargs = {}
    if 'num_reads' in sampler.parameters:
        kwargs['num_reads'] = 50
    if 'answer_mode' in sampler.parameters:
        kwargs['answer_mode'] = 'histogram'
    response = sampler.sample(bqm_embedded, **kwargs)

    # convert back to the original problem space
    response = dimod.unembed_response(response, embedding, source_bqm=bqm)

    sampler.client.close()

    log.debug('embedding and sampling time: %s', time.time() - sample_time)

"The industry is at an inflection point and we've moved beyond the theoretical, and into the practical era of quantum applications. It's time to open this up to more smart, curious developers so they can build the first quantum killer app. Leap's combination of immediate access to live quantum computers, along with tools, resources, and a community, will fuel that," said Brownell. "For Leap's future, we see millions of developers using this to share ideas, learn from each other and contribute open-source code. It's that kind of collaborative developer community that we think will lead us to the first quantum killer app." The folks at D-Wave created a number of tutorials as well as a forum where users can learn and ask questions. The entire project is truly the first of its kind and promises unprecedented access to what amounts to the foreseeable future of computing. I've seen lots of technology over the years, and nothing quite replicated the strange frisson associated with plugging into a quantum computer. Like the teletype and green-screen terminals used by the early hackers like Bill Gates and Steve Wozniak, D-Wave has opened up a strange new world. How we explore it is up to us.
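As a practical footnote to the Leap discussion above, here is a minimal sketch of what submitting a problem to a D-Wave QPU looks like with the company's open-source Ocean SDK. The two-variable QUBO is a toy example of my own, not one of D-Wave's demos, and it assumes the Ocean tools are installed and a Leap API token has already been configured locally.

# Minimal sketch of submitting a toy problem to a D-Wave QPU via the Ocean SDK.
# Assumes `dwave-ocean-sdk` is installed and a Leap API token is configured
# (e.g. with `dwave config create`). The QUBO below is an illustrative toy,
# not one of D-Wave's official demos.
from dwave.system import DWaveSampler, EmbeddingComposite

# Tiny QUBO: minimize x0 + x1 - 2*x0*x1, whose ground states are (0, 0) and (1, 1).
Q = {("x0", "x0"): 1, ("x1", "x1"): 1, ("x0", "x1"): -2}

# EmbeddingComposite maps the logical problem onto the QPU's physical qubit graph,
# the same "embedding" step the factoring demo above performs explicitly.
sampler = EmbeddingComposite(DWaveSampler())
sampleset = sampler.sample_qubo(Q, num_reads=100)

print(sampleset.first.sample, sampleset.first.energy)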
Mars Rover Curiosity is switching brains so it can fix itself

4:23pm, 4th October, 2018
When you send something to space, it's good to have redundancy. Sometimes you want to send two whole duplicate spacecraft just in case — as was the case with Voyager — but sometimes it's good enough to have two of the critical components. The Mars rover Curiosity is no exception, and it is now in the process of switching from one main "brain" to the other so it can do digital surgery on the first. Curiosity landed on Mars with two central computing systems, Side-A and Side-B (not left brain and right brain — that would invite too much silliness). They're perfect duplicates of each other, or were — it was something of a bumpy ride, after all, and cosmic radiation may flip a bit here and there. The team was thankful to have made these preparations when, on sol 200 in February of 2013 (we're almost to sol 2,200 now), the Side-A computer suffered a failure that ended up taking the whole rover offline. The solution was to swap over to Side-B, which was up and running shortly afterwards and sending diagnostic data for its twin. Having run for several years with no issues, Side-B is now, however, having its own problems. Since September 15 it has been unable to record mission data, and it doesn't appear to be a problem that the computer can solve itself. Fortunately, in the intervening period, Side-A has been fixed up to working condition — though it has a bit less memory than it used to, since some corrupted sectors had to be quarantined. "We spent the last week checking out Side A and preparing it for the swap," said Steven Lee, deputy project manager of the Curiosity program at JPL. "We are operating on Side A starting today, but it could take us time to fully understand the root cause of the issue and devise workarounds for the memory on Side B. It's certainly possible to run the mission on the Side-A computer if we really need to. But our plan is to switch back to Side B as soon as we can fix the problem to utilize its larger memory size." No timeline just yet for how that will happen, but the team is confident that they'll have things back on track soon. The mission isn't in jeopardy — but this is a good example of how a good system of redundancies can add years to the life of space hardware.
This autonomous spray-painting drone is a 21st-century tagger’s dream

12:03pm, 4th October, 2018
Whenever I see an overpass or billboard that's been tagged, I worry about the tagger and the danger they exposed themselves to in order to get that cherry spot. This autonomous spray-painting drone, developed by ETH Zurich and Disney Research, will take some of the danger out of the hobby. It could also be used for murals and stuff, I guess. Although it seems an obvious application in retrospect, there just isn't a lot of drone-based painting being done out there. Consider: a company could shorten or skip the whole scaffolding phase of painting a building or advertisement, leaving the bulk of painting to a drone. Why not? There just isn't a lot of research into it yet, and like so many domain-specific applications, the problem is deceptively complex. This paper only establishes the rudiments of a system, but the potential is clearly there. The drone used by the researchers is a DJI Matrice 100, customized to have a sensing rig mounted on one side and a spraying assembly on the other, counterbalancing each other. The sprayer, notably, is not just a nozzle but a pan-and-tilt mechanism that allows details to be painted that the drone can't be relied on to make itself. To be clear we're still talking broad strokes here, but accurate to an inch rather than three or four. It's also been modified to use wired power and a constant supply of paint, which simplifies the physics and also reduces limits on the size of the surface to be painted. A drone lugging its own paint can wouldn't be able to fly far, and its thrust would have to be constantly adjusted to account for the lost weight of sprayed paint. See? Complex. The first step is to 3D scan the surface to be painted; this can be done manually or via drone. The mesh is then compared to the design to be painted and a system creates a proposed path for the drone. Lastly the drone is set free to do its thing. It doesn't go super fast in this prototype form, nor should it, since even the best drones can't stop on a dime, and tend to swing about when they reduce speed or change direction. Slow and steady is the word, following a general path to put the nozzle in range of where it needs to shoot. All the while it is checking its location against the known 3D map of the surface so it doesn't get off track. In case you're struggling to see the "bear," it's standing up with its paws on a tree. That took me a long time to see so I thought I'd spare you the trouble. Let's be honest: this thing isn't going to do much more complicated than some line work or a fill. But for a lot of jobs that's exactly what's needed — and it's often the type of work that's the least suited to skilled humans, who would rather be doing stuff only they can do. A drone could fill in all the easy parts on a building and then the workers can do the painstaking work around the windows or add embellishments and details. For now this is strictly foundational work — no one is going to hire this drone to draw a Matterhorn on their house — but there's a lot of potential here if the engineering and control methods can be set down with confidence.
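To make that scan-plan-correct loop a little more concrete, here is a toy sketch in Python. It is purely illustrative, not the ETH Zurich/Disney code, and every number in it is invented: the drone follows coarse waypoints derived from the design, while the pan-and-tilt nozzle absorbs the residual error between where the drone actually is and where the paint needs to land.

# Toy sketch of the scan -> plan -> correct loop described above.
# Purely illustrative; not the ETH Zurich / Disney Research implementation.

def plan_coarse_path(design_points, every_nth=5):
    """Down-sample the design into waypoints the drone can follow slowly."""
    return design_points[::every_nth]

def nozzle_correction(drone_pos, target_pos, max_offset=0.3):
    """Pan/tilt offset (in meters on the wall) that covers the drone's position
    error, clamped to what the spray mechanism can physically reach."""
    dx = target_pos[0] - drone_pos[0]
    dy = target_pos[1] - drone_pos[1]
    clamp = lambda v: max(-max_offset, min(max_offset, v))
    return clamp(dx), clamp(dy)

# Design: a 3-meter horizontal stroke on the wall, one point every 10 cm.
design = [(x / 10.0, 1.5) for x in range(0, 31)]

for waypoint in plan_coarse_path(design):
    drone_pos = (waypoint[0] + 0.08, waypoint[1] - 0.05)  # pretend the drone drifts ~10 cm
    print(waypoint, "nozzle offset:", nozzle_correction(drone_pos, waypoint))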
Despite objection, Congress passes bill that lets U.S. authorities shoot down private drones

7:43am, 4th October, 2018
U.S. authorities will soon have the power to shoot down private drones if they are considered a threat — a move decried by civil liberties and rights groups. The Senate passed the bill on Wednesday, months after an earlier House vote in April. The bill renews funding for the Federal Aviation Administration (FAA) until 2023, and includes several provisions designed to modernize U.S. aviation rules — from making commercial flights more comfortable for passengers to including new provisions to act against privately owned drones. But critics say the new authority that gives the government the right to "disrupt," "exercise control," or "seize or otherwise confiscate" drones deemed a "credible threat" is dangerous and doesn't include enough safeguards. Federal authorities would not need to first obtain a warrant, and rights groups say that authority could be easily abused, making it possible for Homeland Security and the Justice Department and its various law enforcement and immigration agencies to shoot down anyone's drone for any justifiable reason. Drones, or unmanned aerial vehicles, have rocketed in popularity, used by everyone from amateur pilots and explorers to journalists reporting from the skies. But there's also been a growing list of incidents, from someone accidentally crashing a drone on the grounds of the White House to the use of drones on the battlefield. Both the ACLU and the Electronic Frontier Foundation have denounced the bill. "These provisions give the government virtually carte blanche to surveil, seize, or even shoot a drone out of the sky — whether owned by journalists or commercial entities — with no oversight or due process," an ACLU spokesperson told TechCrunch. "They grant new powers to the Justice Department and the Department of Homeland Security to spy on Americans without a warrant," and they "undermine the use of drones by journalists, which have enabled reporting on critical issues like hurricane damage and protests at Standing Rock." "Flying of drones can raise security and privacy concerns, and there may be situations where government action is needed to mitigate these threats," the ACLU said. "But this bill is the wrong approach." The EFF agreed, arguing the bill endangers the First and Fourth Amendment rights of freedom of speech and the protection from warrantless device seizures. "If lawmakers want to give the government the power to hack or destroy private drones, then Congress and the public should have the opportunity to debate how best to provide adequate oversight and limit those powers to protect our right to use drones for journalism, activism, and recreation," the EFF said. Other privacy groups, including the Electronic Privacy Information Center, have criticized passage of the bill without "baseline privacy safeguards." The bill will go to the president's desk, where it's expected to be signed into law.
Juul files lawsuit against other e-cig makers for patent infringement

5:33am, 4th October, 2018
Juul Labs today filed a complaint with the United States International Trade Commission (ITC) claiming that several organizations are infringing on Juul Labs' patents. Juul has asked the ITC to halt the importation, distribution and sale of these products in the U.S. In all, eighteen entities are listed within the complaint as having infringed Juul patents. They predominantly hail from within the U.S. and China, with one based in France, according to the complaint. Earlier this year, Juul Labs filed trademark infringement claims against companies that were allegedly using the Juul design or name brand. Obviously, competition is one reason to take legal action, but Juul has other priorities. The company is under an immense amount of scrutiny by the FDA and lawmakers with regards to underage usage of the product. Counterfeit products are often sold without any age verification, putting electronic nicotine delivery systems in the hands of yet more minors. From the release: Whereas Juul Labs implements strict manufacturing and quality controls during the manufacturing of its products, little is known about how most of the accused devices are manufactured. Similarly, whereas Juul Labs applies strict age-gating when selling its products through its website, many of the accused products appear to be sold with little or no real age-verification processes. Notably, in contrast to Juul's products, many of the accused copy-cat products include inappropriate flavors, seemingly directed to attract underage users – flavors like "Bubble Bubble," "Apple Juice" and "Sour Gummy." In mid-September, FDA Commissioner Scott Gottlieb announced that Juul and other vape makers would have to come up with a more robust, comprehensive plan to combat underage use of the products. That 60-day period is about halfway complete. It's unclear what the consequences will be for a plan that doesn't meet the FDA's satisfaction, but there has been plenty of talk about banning flavored liquids, which would be a severe blow to Juul and other e-cig companies.
Kobo’s Forma e-reader takes on Kindle Oasis with an asymmetric design and premium price

4:34pm, 3rd October, 2018
Kobo's Forma is a complete about-face from its anonymous, cheap, and highly practical Clara HD; the Forma is big, expensive, and features a bold — not to say original — design. It's clearly meant to take on the Kindle Oasis and e-reader fans for whom price is no object. The $280 Forma joins a number of other e-readers in using a one-handed design, something which, we might as well admit up front, isn't for everyone. That said, I've found that my reading style on these devices has been able to adapt from one form factor to another — it's not like they made it head-mountable or something. You still hold it like you would any other small device. It uses an 8-inch E-Ink Carta display with 300 pixels per inch, which is more than enough for beautiful type. The frontlight — essentially a layer above the display that lights up and bounces light off it to illuminate the page — is a Kobo specialty, adjustable from very cold to very warm in cast and everywhere in between. The Clara HD, Kobo's best entry-level device, left, and the Forma. (The color cast of the screens is adjustable.) The screen will be very similar to that of the Aura One, Kobo's previous high-end reader, but the Forma's asymmetric design gives it slightly closer-to-square dimensions. Where it differs from the Kindle Oasis is in size and a couple important particulars of design. The Forma is slightly larger, by about 20 millimeters (3/4″ or so) in height and width, and is ever so slightly, but not noticeably, thicker. (I didn't have one to compare on hand, unfortunately.) It's also worth saying that like all Kobo devices, there are no forced advertisements on this one and you can load your own books as easily as that. To me Kindles aren't even an option any more because of the "special offers" and limited file support. Chin or ear? The shape is similar, as anyone can see, but the Kobo team decided to go against having a flush front side and instead give the device a "chin," as we used to call it on HTC phones, though being on the side it would perhaps more accurately be termed an "ear." The screen, of course, is flat, but the grip on the side rises up from it at a 15-degree angle or so. Is this better or worse than having a flush front? Aesthetically I prefer the flush screen but practically speaking it is better to have a flat back so it lies flat when you put it down or prop it against something. That the Oasis sits at a tilt when you set it down on a table is something that bothers me. (I'm very sensitive, as you can tell.) It's still very light, only 30 grams more than the Clara, the same amount less than the Aura One, and nearly equal to the Oasis. Despite being larger than any of those, it's no less portable. That said, the Clara will fit in my back pocket, and this one most definitely will not. The device is fully waterproof, like the Oasis, although liquid on the screen can disrupt touch functionality (this is just a physics thing). Nothing to worry about, just wipe it off. The USB port is just wide open, but obviously it's been sealed off inside. Don't try charging it underwater. I am worried about the material the grip is made of: a satin-finish plastic that's very nice to the touch but tends to attract fingerprints and oils. Look, everyone has oils. But the grip of the Forma won't let you forget it. 
Although the power button is mushy and it's difficult to tell if you've pressed it right, the page-turn buttons are pleasantly clicky, and despite their appearance of being lever-like, they can easily be pressed anywhere along their length. Which goes forward and which backward switches automatically if you flip the reader over to use the other hand. This flipping process happens more or less instantaneously, with rare exceptions in my brief testing. Neither side feels more "correct," for instance because of the weight distribution or anything. The only mode that doesn't feel correct is landscape. I'm not sure why someone would want to read this way, though I'm sure a few will like it. It just seems like a missed opportunity. Why can't I have two pages displayed side by side, like a little pocket paperback? I'd love that! I've already asked Kobo about this and I assume that because I have done so, they will add it. As it is most books simply feel strange in this mode. Familiar software, unfamiliar price Text handling seems unchanged from Kobo's other devices, which means it's just fine — the typefaces are good and there are lots of options to adjust it to your taste book by book. Kobo's much-appreciated drag-and-drop book adding and support for over a dozen formats (epub, cbr, mobi, etc.) is here as well with no changes. Pocket integration is solid and extremely useful. The Forma (like Kobo's other readers) does have Overdrive support, meaning that with a library card and account there you can easily request and read books from your local branch's virtual stock. This is an underutilized service in general (by me as well) and I need to take advantage of it more. So far, so good. But the real question is whether this thing is worth the $280 they're charging for it — $30 more than the Kindle Oasis and an even bigger jump over the Aura One. In my honest opinion, for most people, the answer is no. For the dollar you get a lot more from the Clara HD, which also has the advantage of being compact and pocketable. But it must be said that the Forma is clearly a niche device aimed at people who use their e-reader a lot and want that bigger screen, the waterproofing, the thin profile, the one-handed design. There's a smaller, but not necessarily small, number of people who are willing to pay for that. As it is the Forma is among the most expensive e-readers out there and it's hard to justify that price for ordinary people who just want a good reader with warmth control and good type. The Forma is successful at what it aims to do — provide a credible competitor to Amazon's most expensive device, and beat it at its own game in the ways Kobo usually beats Kindle. That much I can say for certain. Whether to buy it is between you and your wallet. Pre-orders start October 16.
The Freewrite Traveler offers distraction-free writing for the road

10:15am, 2nd October, 2018
If you've ever tried to write something long – a thesis, a book, or a manifesto outlining your disappointment in the modern technocracy and your plan to foment violent revolution – you know that distractions can slow you down or even stop the creative process. That's why the folks at Astrohaus created the Freewrite, a distraction-free typewriter, and it's also why they are launching the Traveler, a laptop-like word processor that's designed for writing and nothing else. The product, which I saw last week, consists of a hearty, full-sized keyboard and an E ink screen. There are multiple "documents" you can open and close and the system autosaves and syncs to cloud services automatically. The laptop costs $279 on Indiegogo and will have a retail price of $599. The goal of the Freewrite is to give you a place to write. You pull it out of your bag, open it, and start typing. That's it. There are no tweets, Facebook sharing systems, or games. It lasts for four weeks on one charge – a bold claim but not impossible – and there are some improvements to the editing functions including virtual arrow keys that let you move up and down in a document as you write. There are also hotkeys to bring up ancillary information like outlines, research, or notes. If the Traveler is anything like the original Freewrite then you can expect some truly rugged hardware. I tested an early model and the entire thing was built like a tank or, more correctly, like a Leica. Because it is aimed at the artistic wanderer, the entire thing weighs two pounds and folds up into a compact, laptop-like package. Is it for you? Well, if you liked the original or even missed the bandwagon when it first launched, you might really enjoy the Traveler. Because it is small and light it could easily become a second writing device for your more creative work that you pull out in times of pensive creativity. It is not a true word processor replacement, however, and it is a "first-thought-best-thought" kind of tool, allowing you to get words down without much fuss. I wouldn't recommend it for research-intensive writing but you could easily sketch out almost any kind of document on the Traveler and then edit it on a real laptop. There aren't many physical tools to support distraction-free writing. Some folks, myself included, have used the infamous AlphaSmart, a crazy old word processor used by students, or simply set up laptops without a Wi-Fi connection. The Freewrite Traveler takes all of that to the next level by offering the simplest, clearest, and most distraction-free system available. Given it's 50% off right now on Indiegogo, it might be the right time to take the plunge.
The Das Keyboard 5Q adds IoT to your I/O keys

10:29am, 1st October, 2018
Just when you thought you were safe from IoT on your keyboard, Das Keyboard has come out with the 5Q, a smart keyboard that can send you notifications and change colors based on the app you're using. These kinds of keyboards aren't particularly new – you can find gaming keyboards that light up all the colors of the rainbow. But the 5Q is almost completely programmable and you can connect it to automation services like Zapier. This means you can do things like blink the Space Bar red when someone passes your Nest camera or blink the Tab key white when the outdoor temperature falls below 40 degrees. You can also make a key blink when someone tweets, which could be helpful or frustrating. The $249 keyboard is delightfully rugged and the switches – made by Das Keyboard – are nicely clicky but not too loud. The keys have a bit of softness to them at the halfway point so if you're used to Cherry-style keyboards you might notice a difference here. That said, the keys are rated for 100 million actuations, far more than any competing switch. The RGB LEDs in each key, as you can see below, are very bright and visible, but when the key lights are all off the keyboard is completely unreadable. This, depending on your preferences, is a feature or a bug. There is also a media control knob in the top right corner that brings up the Q app when pressed. The entire package is nicely designed but the 5Q raises the question: do you really need a keyboard that can notify you when you get a new email? The Mac version of the software is also a bit buggy right now but they are updating it constantly and I was able to install it and run it without issue. Weird things sometimes happen, however. For example, currently my Escape and F1 keys are blinking red and I don't know how to turn them off. That said, Das Keyboard makes great keyboards. They're my absolute favorite in terms of form factor and key quality and if you need a keyboard that can notify you when a cryptocurrency goes above a certain point or your Tesla stock is about to tank, look no further than the 5Q. It's a keyboard for hackers by hackers and, as you can see below, the color transitions are truly mesmerizing. My keyboard glows — John Biggs (@johnbiggs)
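For a sense of how that kind of key-level notification might be wired up, here is a rough Python sketch that posts a "blink this key" signal to the keyboard's local companion software. To be clear, the port, path and JSON field names below are assumptions made for illustration, not the documented Das Keyboard Q API; check the vendor's documentation for the real endpoint and payload.

# Rough sketch: ask the 5Q's companion software to blink a key when an alert fires.
# NOTE: the URL, port and payload fields are ASSUMPTIONS for illustration only,
# not the documented Das Keyboard Q API. Consult the official docs for real values.
import requests

Q_SIGNALS_URL = "http://localhost:27301/api/1.0/signals"  # assumed local endpoint

def blink_key(zone_id, color="#FF0000", message="New alert"):
    payload = {
        "zoneId": zone_id,   # assumed key identifier, e.g. "KEY_SPACE"
        "color": color,
        "effect": "BLINK",   # assumed effect name
        "name": message,
    }
    response = requests.post(Q_SIGNALS_URL, json=payload, timeout=5)
    response.raise_for_status()

if __name__ == "__main__":
    # e.g. called from a webhook handler when a Nest camera reports motion
    blink_key("KEY_SPACE")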
Two weeks with a $16,000 Hasselblad kit

10:53am, 29th September, 2018
For hobbyist photographers like myself, Hasselblad has always been the untouchable luxury brand reserved for high-end professionals. To fill the gap between casual and professional photography, they released the X1D — a compact, mirrorless medium format camera. Last summer when Stefan Etienne reviewed the newly released camera, I asked to take a picture. After importing the raw file into Lightroom and flipping through a dozen presets, I joked that I would eat Ramen packets for the next year so I could buy this camera. It was that impressive. XCD 3.5/30mm lens Last month Hasselblad sent us the XCD 4/21mm (their latest ultra wide-angle lens) for a two-week review, along with the X1D body and XCD 3,2/90mm portrait lens for comparison. I wanted to see what I could do with the kit and had planned the following: swipe right on everyone with an unflattering Tinder profile picture and offer to retake it for them, and travel somewhere with spectacular landscapes. My schedule didn't offer much time for either, so a weekend trip to the cabin would have to suffice. As an everyday camera The weekend upstate was rather quiet and uneventful, but it served as the perfect setting to test out the camera kit because the X1D is slow AF. It takes approximately 8 seconds to turn on, with an additional 2-3 seconds of processing time after each shutter click — top that off with a slow autofocus, slow shutter release and short battery life (I went through a battery within a day, approximately 90 shots fired). Rather than reiterating Stefan's review, I would recommend reading it for full specifications. Coming from a Canon 5D Mark IV, I'm used to immediacy and a decent hit rate. The first day with the Hasselblad was filled with constant frustration from missed moments, missed opportunities. It felt impractical as an everyday camera until I shifted toward a more deliberate approach — reverting back to high school SLR days when a roll of film held a limited 24 exposures. When I took pause, I began to appreciate the camera's details: a quiet shutter, a compact but sturdy body and an intuitive interface, including a touchscreen LCD display/viewfinder. Nothing looks or feels cheap about the Swedish-designed, aluminum construction of both the body and lenses. It's heavy for a mirrorless camera, but it feels damn good to hold. XCD 4/21mm lens Dramatic landscapes and cityscapes without an overly exaggerated perspective — this is where the XCD 4/21mm outperforms other super wide-angle lenses. With a 105° angle of view and a 17mm field of view equivalent on a full-frame DSLR, I was expecting a lot more distortion and vignetting, but the image automatically corrected itself and flattened out when imported into Lightroom. The latest deployment of Creative Cloud has the Hasselblad (camera and lens) profile integrated into Lightroom, so there's no need for downloading and importing profiles. Oily NYC real estate brokers should really consider using this lens to shoot their dinky 250 sq. ft. studio apartments to make them feel grand without looking comically fish-eyed. XCD 3,2/90mm lens The gallery below was shot using only the mirror's vanity lights as practicals. 
It was also shot underexposed to see how much detail I could pull in post. The images posted here are compressed, so you don't have to wait for each 110MB file to load.

[gallery type="slideshow" link="none" columns="1" size="full" ids="1722193,1722194,1722195,1722196"]

I'd like to think that if I had time and was feeling philanthropic, I could fix a lot of love lives on Tinder with this lens.

Where it shines

Normally, images posted in reviews are unedited, but I believe the true test of raw images lies in post-production. This is where the X1D's slow processing time and quick battery drainage pay off. With the camera's giant 50MP 44 x 33mm CMOS sensor, each raw file was approximately 110MB (compared to my Mark IV's 20-30MB) — that's a substantial amount of information packed into 8272 x 6200 pixels. While other camera manufacturers tend to favor certain colors and skin tones, Dan Wang, a Hasselblad rep, told me, "We believe in seeing a very natural or even palette with very little influence. We're not here to gatekeep what color should be. We're here to give you as much data as possible, providing as much raw detail, raw color information that allows you to interpret it to your extent." As someone who enjoys countless hours tweaking colors, shifting pixels and making things pretty, I'm appreciative of this. It allows for less fixing, more creative freedom.

Who is this camera for?

My friend Peter, a fashion photographer (he's done editorial features for Harper's Bazaar, Cosmopolitan and the like), is the only person I know who shoots on Hasselblad, so it felt appropriate to ask his opinion. "It's for pretentious rich assholes with money to burn," he snarked. I disagree. The X1D is a solid step for Hasselblad to get off heavy-duty tripods and out of the studio. At this price point, though, one might expect the camera to do everything, but it's aimed at a narrow demographic: a photographer who is willing to trade speed for quality and compactness. With smartphone companies like Apple and Samsung stepping up their camera game over the past few years, the photography world feels inundated with inconsequential, throwaway images (self-indulgent selfies, "look what I had for lunch," OOTD…). My two weeks with the Hasselblad were a kind reminder of photography as a methodical art form, rather than a spray-and-pray hobby.

The reviewed kit runs $15,940 pre-tax: the X1D body at $8,995.00 (currently on sale at B&H for $6,495.00), plus the two XCD lenses at $3,750.00 and $3,195.00.
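As a footnote, the headline numbers in this review hang together if you run the arithmetic. Here is a minimal sketch in Python; the diagonal-based crop factor and the 16-bits-per-photosite figure are my own illustrative assumptions, not published Hasselblad specs:

```python
import math

# Field-of-view check for the XCD 4/21mm on the X1D's 44 x 33 mm sensor
full_frame_diag = math.hypot(36, 24)      # ~43.3 mm
x1d_diag = math.hypot(44, 33)             # 55 mm
crop_factor = full_frame_diag / x1d_diag  # ~0.79, a "reverse" crop relative to full frame
print(f"21mm behaves like ~{21 * crop_factor:.0f}mm on full frame")            # ~17mm
print(f"diagonal angle of view ~{math.degrees(2 * math.atan(x1d_diag / (2 * 21))):.0f} deg")  # ~105 deg

# Raw file size check for the 8272 x 6200 sensor, assuming 16 bits per photosite
width, height = 8272, 6200
print(f"{width * height / 1e6:.1f} MP")                              # ~51.3 MP, marketed as "50MP"
print(f"~{width * height * 2 / 1e6:.0f} MB uncompressed per frame")  # ~103 MB, near the ~110MB files
```

The remaining gap between roughly 103MB and the ~110MB files presumably comes down to metadata and the raw container format, but that last part is a guess.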
California cops bust crime ring that nabbed $1M worth of devices from Apple Stores

7:57pm, 27th September, 2018
Fear not, citizens — the law enforcement apparatus of California has apprehended or is hot on the trail of more than a dozen hardened criminals who boldly stole from the state’s favorite local business: Apple. Their unconscionable larceny amounted to more than a million dollars’ worth of devices stolen from Apple Stores — the equivalent of hundreds of iPhones. The alleged thieves would wear hoodies into Apple Stores — already suspicious, I know — and there they would snatch products on display and hide them in the ample pockets of those garments. Truly cunning. These crimes took place in 19 different counties in California, the police forces of which all collaborated to bring the perpetrators to justice, though the San Luis Obispo and Oakland departments led the charge. So far seven of the thieves have been arrested, and nine more have warrants out. In a press release, California Attorney General Xavier Becerra had this to say regarding the dangers of the criminal element: “Organized retail thefts cost California business owners millions and expose them to copycat criminals. Ultimately, consumers pay the cost of this merchandise hijacking. We will continue our work with local law enforcement authorities to extinguish this mob mentality and prosecute these criminals to hold them accountable.” You hear that, would-be copycats? You hear that, assembling mob? Xavier’s gonna give it to you… if you don’t fly straight and stop trying to stick ordinary consumers with the costs of your crimes. Not to mention California businesses. With Apple paying that $15 billion in back taxes, it doesn’t have a lot of cash to spare for these shenanigans. Well, I suppose it’s doing all right. I’ve asked Apple for comment on this case and whether they participated or cooperated in it. Perhaps Face ID helped.
How aerial lidar illuminated a Mayan megalopolis

3:37pm, 27th September, 2018
Archaeology may not be the most likely place to find the latest in technology — AI and robots are of dubious utility in the painstaking fieldwork involved — but lidar has proven transformative. The latest accomplishment using laser-based imaging maps thousands of square kilometers of an ancient Mayan city once millions strong, but the researchers make it clear that there’s no technological substitute for experience and a good eye. The Pacunam Lidar Initiative began two years ago, bringing together a group of scholars and local authorities to undertake the largest yet survey of a protected and long-studied region in Guatemala. Some 2,144 square kilometers of the Maya Biosphere Reserve in Petén were scanned, inclusive of and around areas known to be settled, developed, or otherwise of importance. Preliminary imagery and data illustrating the success of the project were announced earlier this year, but the researchers have now performed their actual analyses on the data, and the resulting paper summarizing their wide-ranging results has been published. The areas covered by the initiative spread over perhaps a fifth of the country. “We’ve never been able to see an ancient landscape at this scale all at once. We’ve never had a dataset like this. But in February really we hadn’t done any analysis, really, in a quantitative sense,” co-author Francisco Estrada-Belli, of Tulane University, told me. He worked on the project with numerous others, including his colleagues Marcello Canuto and Stephen Houston. “Basically we announced we had found a huge urban sprawl, that we had found agricultural features on a grand scale. After another 9 months of work we were able to quantify all that and to get some numerical confirmations for the impressions we’d gotten.” “It’s nice to be able to confirm all our claims,” he said. “They may have seemed exaggerated to some.” The lidar data was collected not by self-driving cars, which seem to be the only vehicles bearing lidar we ever hear about, nor even by drones, but by traditional airplane. That may sound cumbersome, but the distances and landscapes involved permitted nothing else. “A drone would never have worked — it could never have covered that area,” Estrada-Belli explained. “In our case it was actually a twin engine plane flown down from Texas.” The plane made dozens of passes over a given area, a chosen “polygon” perhaps 30 kilometers long and 20 wide. Mounted underneath was “a Teledyne Optech Titan MultiWave multichannel, multi-spectral, narrow-pulse width lidar system,” which pretty much says it all: this is a heavy-duty instrument, the size of a refrigerator. But you need that kind of system to pierce the canopy and image the underlying landscape. The many overlapping passes were then collated and calibrated into a single digital landscape of remarkable detail. “It identified features that I had walked over — a hundred times!” he laughed. “Like a major causeway, I walked over it, but it was so subtle, and it was covered by huge vegetation, underbrush, trees, you know, jungle — I’m sure that in another 20 years I wouldn’t have noticed it.” But these structures don’t identify themselves. There’s no computer labeling system that looks at the 3D model and says, “this is a pyramid, this is a wall,” and so on. That’s a job that only archaeologists can do. “It actually begins with manipulating the surface data,” Estrada-Belli said. “We get these surface models of the natural landscape; each pixel in the image is basically the elevation. 
Then we do a series of filters to simulate light being projected on it from various angles to enhance the relief, and we combine these visualizations with transparencies and different ways of sharpening or enhancing them. After all this process, basically looking at the computer screen for a long time, then we can start digitizing it.” “The first step is to visually identify features. Of course, pyramids are easy, but there are subtler features that, even once you identify them, it’s hard to figure out what they are.” The lidar imagery revealed, for example, lots of low linear features that could be man-made or natural. It’s not always easy to tell the difference, but context and existing scholarship fill in the gaps. “Then we proceeded to digitize all these features… there were 61,000 structures, and everything had to be done manually,” Estrada-Belli said — in case you were wondering why it took nine months. “There’s really no automation because the digitizing has to be done based on experience. We looked into AI, and we hope that maybe in the near future we’ll be able to apply that, but for now an experienced archaeologist’s eye can discern the features better than a computer.” You can see the density of the annotations on the maps. It should be noted that many of these features had by this point been verified by field expeditions. By consulting existing maps and getting ground truth in person, they had made sure that these weren’t phantom structures or wishful thinking. “We’re confident that they’re all there,” he told me. [gallery ids="1721959,1721960,1721957,1721961,1721958"] “Next is the quantitative step,” he continued. “You measure the lengths and the areas and you put it all together, and you start analyzing them like you’d analyze any other dataset: the structure density of some area, the size of urban sprawl or agricultural fields. Finally we even figured out a way to quantify the potential production of agriculture.” This is the point where the imagery starts to go from point cloud to academic study. After all, it’s well known that the Maya had a large city in this area; it’s been intensely studied for decades. But the Pacunam (which stands for Patrimonio Cultural y Natural Maya) study was meant to advance beyond the traditional methods employed previously. “It’s a huge dataset. It’s a huge cross section of the Maya lowlands,” Estrada-Belli said. “Big data is the buzzword now, right? You truly can see things that you would never see if you only looked at one site at a time. We could never have put together these grand patterns without lidar.” “For example, in my area, I was able to map 47 square kilometers over the course of 15 years,” he said, slightly wistfully. “And in two weeks the lidar produced 308 square kilometers, to a level of detail that I could never match.” As a result the paper includes all kinds of new theories and conclusions, from population and economy estimates, to cultural and engineering knowledge, to the timing and nature of conflicts with neighbors. The resulting report doesn’t just advance the knowledge of Mayan culture and technology, but the science of archaeology itself. It’s iterative, of course, like everything else — Estrada-Belli noted that they were inspired by work done by colleagues in Belize and Cambodia; their contribution, however, exemplifies new approaches to handling large areas and large datasets. The more experiments and field work, the more established these methods will become, and the more widely they will be accepted and replicated. 
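What Estrada-Belli describes (surface models where “each pixel in the image is basically the elevation,” then filters that simulate light from various angles) is, in GIS terms, gridding the lidar returns into a digital elevation model and hillshading it from multiple azimuths. Here is a minimal sketch of that idea in Python/NumPy; the lowest-return-per-cell shortcut and the particular blend of four light directions are my own simplifications for illustration, not the Pacunam team’s actual pipeline:

```python
import numpy as np

def points_to_dem(x, y, z, cell=1.0):
    """Grid lidar returns into an elevation raster: keep the lowest return
    per cell as a crude stand-in for the ground surface under the canopy."""
    cols = ((x - x.min()) / cell).astype(int)
    rows = ((y - y.min()) / cell).astype(int)
    dem = np.full((rows.max() + 1, cols.max() + 1), np.nan)
    for r, c, elev in zip(rows, cols, z):
        if np.isnan(dem[r, c]) or elev < dem[r, c]:
            dem[r, c] = elev
    return dem

def hillshade(dem, cell=1.0, azimuth_deg=315.0, altitude_deg=45.0):
    """Simulate light hitting the elevation surface from one direction."""
    az = np.radians(360.0 - azimuth_deg + 90.0)
    alt = np.radians(altitude_deg)
    dz_dy, dz_dx = np.gradient(dem, cell)          # slope components of the surface
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    aspect = np.arctan2(-dz_dx, dz_dy)
    shade = np.sin(alt) * np.cos(slope) + np.cos(alt) * np.sin(slope) * np.cos(az - aspect)
    return np.clip(shade, 0.0, 1.0)

# Toy example: random returns over a gentle mound instead of a real survey polygon
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 200, 50_000), rng.uniform(0, 200, 50_000)
z = 5 * np.exp(-((x - 100) ** 2 + (y - 100) ** 2) / 2000) + rng.normal(0, 0.2, 50_000)
dem = points_to_dem(x, y, z)
# Blend several light directions to pull out subtle relief, as the researchers describe
relief = np.nanmean([hillshade(dem, azimuth_deg=a) for a in (315, 225, 135, 45)], axis=0)
```

A production workflow would first classify ground versus canopy returns and calibrate the overlapping flight passes before gridding; taking the lowest return per cell is only a rough stand-in for that step.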
Already they have proven themselves invaluable, and this study is perhaps the best example of lidar’s potential in the field. “We simply would not have seen these massive fortifications. Even on the ground, many of their details remain unclear. Lidar makes most human-made features clear, coherent, understandable,” explained co-author Stephen Houston (also from Tulane) in an email. “AI and pattern recognition may help to refine the detection of features, and drones may, we hope, bring down the cost of this technology.” “These technologies are important not only for discovery, but also for conservation,” pointed out co-author Thomas Garrison in an email. “3D scanning of monuments and artifacts provide detailed records and also allow for the creation of replicas via 3D printing.” Lidar imagery can also show the extent of looting, he wrote, and help cultural authorities guard against it by being aware of relics and sites before the looters are. The researchers are already planning a second, even larger set of flyovers, founded on the success of the first experiment. Perhaps by the time the initial physical work is done, the trendier tools of the last few years will make themselves applicable. “I doubt the airplanes are going to get less expensive but the instruments will be more powerful,” Estrada-Belli suggested. “The other line is the development of artificial intelligence that can speed up the project; at least it can rule out areas, so we don’t waste any time, and we can zero in on the areas with the greatest potential.” He’s also excited by the idea of putting the data online so citizen archaeologists can help pore over it. “Maybe they don’t have the same experience we do, but like artificial intelligence they can certainly generate a lot of good data in a short time,” he said. But as his colleagues point out, even years in this line of work are necessarily preliminary. “We have to emphasize: it’s a first step, leading to innumerable ideas to test. Dozens of doctoral dissertations,” wrote Houston. “Yet there must always be excavation to look under the surface and to extract clear dates from the ruins.” “Like many disciplines in the social sciences and humanities, archaeology is embracing digital technologies. Lidar is just one example,” wrote Garrison. “At the same time, we need to be conscious of issues in digital archiving (particularly the problem of obsolete file formatting) and be sure to use technology as a complement to, and not a replacement for, methods of documentation that have proven tried and true for over a century.” The researchers’ paper was published today in Science; you can learn about their conclusions (which are of more interest to the archaeologists and anthropologists among our readers) there, and follow other work being undertaken by the Fundación Pacunam.
Soviet camera company Zenit is reborn!

1:39pm, 26th September, 2018
If you’re familiar with 20th century Soviet camera clones, you’ll probably be familiar with Zenit. Created by Krasnogorsky Zavod, the Nikon/Leica clones were a fan favorite behind the Iron Curtain and, like the Lomo, Zenit was a beloved brand that just doesn’t get its due. The firm stopped making cameras in 2005, but in its long history it defined Eastern European photography for decades and introduced the rifle-like Photo Sniper camera, which looked like something out of James Bond. Now, thanks to a partnership with Leica, Zenit is back and is here to remind you that in Mother Russia, picture takes you. The camera is based on the Leica M Type 240 platform but has been modified to look and act like an old Zenit. It comes with a Zenitar 35mm f/1.0 lens that is completely Russian-made. You can use it for bokeh and soft-focus effects without digital processing. The Leica M platform offers a 24MP full-frame CMOS sensor, a 3-inch LCD screen, HD video recording, live view focusing, a 0.68x viewfinder, ISO 6400, and 3fps continuous shooting. It will be available this year in the US, Europe, and Russia. How much does the privilege of returning to the past cost? An estimated $5,900-$7,000, if previous incarnations of the Leica M are any indication. I have a few old film Zenits lying around the house, however. I wonder if I can stick in some digital guts and create the ultimate Franken-Zenit?
Watch this tiny robot crawl through a wet stomach

9:19am, 26th September, 2018
While this video shows a tiny robot from the City University of Hong Kong doing what amounts to a mitzvah, we can all imagine a future in which this little fellow could stab you in the kishkes. This wild little robot uses electromagnetic force to swim or flop back and forth to pull itself forward through harsh environments. Researchers can remotely control it from outside the body. “Most animals have a leg-length to leg-gap ratio of 2:1 to 1:1. So we decided to create our robot using 1:1 proportion,” said Dr. Shen Yajing of CityU’s Department of Biomedical Engineering. The legs are 0.65 mm long and pointed, reducing friction. The robot is made of a “silicone material called polydimethylsiloxane (PDMS) embedded with magnetic particles which enables it to be remotely controlled by applying electromagnetic force.” It can bend almost 90 degrees to climb over obstacles. The researchers have sent the little fellow through multiple rough environments, including this wet model of a stomach. It can also carry medicines and drop them off as needed. “The rugged surface and changing texture of different tissues inside the human body make transportation challenging. Our multi-legged robot shows an impressive performance in various terrains and hence open wide applications for drug delivery inside the body,” said Professor Wang Zuankai. The team hopes to create a biodegradable robot in the future, which would allow the little fellow to climb down your esophagus and into your guts and then, when it has dropped its payload, dissolve into nothingness or come out your tuchus.