Category Archives: Cloud Computing

Astronauts are going deep underground to prepare for space travel

Astronauts are spending weeks in underground caves in Italy. The conditions are thought to help prepare them for the harsh reality of spaceflight.

Video courtesy of ESA


How much it would cost to 3D print the Death Star and other real and fictional landmarks

British printing retailer TonerGiant decided to look into how much it would cost, and how long it would take, to recreate some famous landmarks using just a 3D printer. Needless to say, this is no easy task. Here are some of their results, which may surprise you.


The police are using 'super-recognizer' detectives to identify suspects from grainy video footage


Other people’s faces get strangely seared into my brain, even those of complete strangers. It’s not that I necessarily want to remember them — I just can’t seem to help it.

It turns out Eliot Porritt, a detective sergeant with London’s Metropolitan Police, is looking for people like me.

Porritt leads a police task force called the Super Recognizer Unit. Officers in his unit are believed to have an uncanny ability to place a familiar face, a skill that some researchers estimate is present in roughly 1% of the population. Because they’re believed to be able to accurately identify people from grainy, poor-quality images and videos, these super-recognizers are being called in to help crack cases that have gone cold.

Psychologists who’ve researched the phenomenon say it’s a huge boon for law enforcement, and British police officers overseeing their work are thrilled by its apparent success. But lawyers and privacy advocates feel otherwise. To them, the idea of using people whose abilities have not yet been comprehensively studied to identify suspected criminals — and eventually put them behind bars — is worrisome and potentially dangerous.

Face blindness

In the 1990s, researchers identified a region of the brain that is thought to play a key role in our ability to identify a face. They named it the fusiform face area.

In studies of people who’ve experienced brain damage to that region, researchers have identified a condition known as prosopagnosia — a word that combines the Greek “prosopon,” or face, with “agnosia,” or lack of knowledge. Prosopagnosics have difficulties recognizing familiar faces — even, sometimes, their own.

More recently, researchers have diagnosed the condition in people without brain damage as well. This type of prosopagnosia is known as developmental prosopagnosia because its sufferers appear to be born with it. The deficit doesn’t appear to negatively affect other intellectual efforts in those people. Oliver Sacks, the renowned neurologist and prolific writer, for example, was a prosopagnosic, and he wrote about his condition in the book “The Mind’s Eye.”

“I am much better at recognizing my neighbors’ dogs (they have characteristic shapes and colors) than my neighbors themselves,” Sacks wrote.

Initially, researchers assumed that there were only two groups of people when it came to facial recognition: prosopagnosics, or people who were face-blind, and everyone else. They no longer think it’s quite that simple.

Super-recognition

The first paper to mention the phrase “super-recognizer” was published in 2009. In it, Harvard psychologists Ken Nakayama and Richard Russell and University College London cognitive neuroscientist Brad Duchaine outlined the experiences of four people who claimed to have an unusually good ability to recognize faces. In addition, the researchers presented the world’s first test designed to identify these so-called super-recognizers, the Cambridge Face Memory Test.


All four subjects in the paper described eerie instances in their past in which they had recognized apparent strangers: family members they hadn’t seen for decades or actors they’d glimpsed once in an ad and then seen again in a movie. Each person in the study said that for years they’d felt as if something were wrong with them. One of the participants, for example, told the researchers she tried to hide her ability and “pretend that I don’t remember … because it seems like I stalk them, or that they mean more to me than they do.”

For the first time, the Cambridge test suggested to these people that they weren’t alone — that their abilities weren’t merely in their head but quantifiable, testable, able to be proved and put down on paper.

Theory meets the London police

Around the same time Duchaine and his coauthors were discussing their newly published findings, psychologist Josh P. Davis, who is now a professor of psychology at the University of Greenwich, was traveling to a conference where he would meet the man in charge of video surveillance for the London Metropolitan Police, Detective Chief Inspector Mick Neville. That meeting would change how the London Police handled video and photo surveillance footage for at least the next five years.

Davis had spent the past few years studying the psychology of surveillance and was particularly interested in the way closed-circuit television, or CCTV, was used in court to identify criminals. Neville was there to give a presentation on a handful of remarkable officers in his force who had repeatedly made what he and other officers refer to as identifications, the successful matching of an image of a person with a name in a database. Upon hearing that, Davis knew he had to talk to him.

“I just went up to him and I said: ‘Look, I’m interested in doing research on this. Is there anything we could be doing for you? Because we have a lot of common interests,'” Davis recalled.

The two agreed on a path forward: They had to give the officers the Cambridge test.

What we know — and don’t know — about facial recognition

Research suggests that facial super-recognition is fundamentally different from traditional memory in several key ways. First, the ability doesn’t appear to be something that can be learned or enhanced with training. Second, it appears to have a neurological and structural basis.

But there’s still a lot we don’t know about super-recognition — and about facial recognition more broadly.

In a recent study in the journal PLoS One, for example, researchers studied two so-called memory champions, people who had competed extensively in memory contests and had even been recognized by Guinness World Records for their memorization skills. When the researchers studied these people’s facial-recognition abilities, however, their results were merely average. In other words, the researchers concluded, something about facial processing was fundamentally different from memory — and it couldn’t be learned by any training or class. Instead, it seemed to be innate.

And if people are born with their facial-recognition abilities, then those abilities most likely have a neurological basis in the brain, researchers say. A super-recognizer, for example, might have a slightly larger fusiform face area than a face-blind person, or the person might show more activity in this area when looking at images of a face. “Any time there’s a psychological difference there has to be a neurological basis,” said Duchaine, the University College London cognitive neuroscientist. “Just like you’d say, OK, that car is faster than that other car. Is there a difference in their engines? Well yes of course there is.”

Still, Duchaine and other researchers lack the data to confirm this. All of the existing studies of super-recognizers are based on very small samples of people — anywhere from just two individuals to a half-dozen people. Several of the researchers have presented their hypotheses about super-recognizers at conferences and presentations, but many of these haven’t yet been published in peer-reviewed journals.

Even normal facial recognition has its limitations. People are generally bad at accurately recognizing the faces of people whose race is different from theirs, for example. This phenomenon, known as the cross-race effect, or CRE, has been replicated by dozens of international psychological studies. It is a problem for law enforcement in particular, especially when it comes to eyewitness testimony.

“The CRE reveals systematic limitations on eyewitness identification accuracy and suggests that some caution is warranted in evaluating cross-race identification,” a team of psychologists wrote in a 2012 study.

Notably, some studies suggest the cross-race effect is reduced when someone has more contact with people of other races (for example, white people who have regular contact with black people are better at accurately identifying black people than white people who have little or no such contact). While all of the police officers in London’s super-recognizer unit are white, all of them reported interacting frequently with people of races other than their own.

The London riots: the first large-scale super-recognizer test

On August 4, 2011, just months after Davis and Neville began testing London’s police officers for their super-recognition abilities, a young black man named Mark Duggan was fatally shot by members of the London Metropolitan Police, whose ranks are nearly 90% white, in sharp contrast to the larger London population. When the local police refused to disclose details about the circumstances of Duggan’s death, members of his family and the surrounding community held what has been described by witnesses, including the police, as a peaceful protest.

But the police did not acknowledge the protest.

“Where we probably didn’t handle it well is no one came and really communicated” with the protesters “or articulated any kind of message, so that process kind of grew in numbers,” said Porritt, who was working as an officer then.

Consistent accounts of what happened in the following days are still hard to come by, but rioting eventually broke out across the city.

Over the next six days, hundreds of businesses were virtually cleaned out. Homes and apartments were destroyed. A double-decker bus was set on fire. Five people died. “By Sunday night, I mean, it was absolute chaos,” Porritt recalled.


A handful of sociological studies have tried to parse out the root causes and intervening factors that influenced the riots. One major theme emerges from all of them: Many of the people involved felt they were responding, in a way, to decades of unfair, racially discriminatory treatment by the police.

“Reading the Riots,” an extensive research project conducted after the riots by the London School of Economics and The Guardian, concluded “widespread anger and frustration at people’s everyday treatment at the hands of police was a significant factor.”

“You see the rioting yeah?” a 20-year-old male interviewee asked the researchers. “Everything the police have done to us, did to us, was in our heads. That’s what gave everyone their adrenaline to want to fight the police … It was because of the way they treated us.”

In addition, the evidence the researchers gathered suggests that those who participated in the riots generally had lower incomes than the UK population at large. “Analysis of more than 1,000 court records suggests 59% of the England rioters come from the most deprived 20% of areas in the UK,” the report said.

Indeed, as far as the looting was concerned, most of the goods that were stolen, according to the report, were electronics, followed by clothing, sportswear, and food.

The police wanted it to stop. And once it was over, they wanted to punish the people they saw as responsible.

But first they had to identify those people.

“It got serious from that point on,” Davis, the University of Greenwich psychologist, recalled, referring to the use of super-recognition in the days and months to come to identify suspects in the riots.

On August 12, 2011, Neville ordered a large trawl, or capture, of all the video and images recorded over the previous six days by London’s citywide CCTV surveillance network.

This was the first time that such footage — grainy, often barely distinguishable slices of chins, slivers of cheeks and eyes, or side profiles of faces — was used in such a systematic way. “Up until then, images really were being probably downloaded by detectives or police officers. And then they were just being probably hidden in a drawer or, if you’re lucky, pinned on a board,” Porritt told me. But Neville changed all that. By contracting with a private company called 3rd Forensic, he made it possible for the police officers to categorize hundreds of thousands of images and hours of surveillance video.

“So the big breakthrough that Mick Neville made was he brought in this database software,” Porritt said. “And because that started categorizing images it also enabled us to track cases.”

Using a system called Forensic Image Linking and Management, 3rd Forensic made it possible to store, label, search for, and retrieve images and videos of people captured not just by CCTV cameras across London but also by body-worn cameras and mobile phones, from social media, and in police booking rooms. These images are stored in a database that officers across the city can search.

“This systematic approach is much the same way as we search for fingerprints and DNA at the scenes of other crimes,” the Metropolitan Police says on its website. The difference here is that officers can search using a variety of terms including what they call personal descriptors, such as whether the person was wearing a hat or carrying a bag. Those descriptors could also include a person’s skin color.
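The FILM software itself is proprietary, but the underlying idea of retrieving images by descriptor tags is easy to illustrate. Here is a minimal sketch in Python — the record fields and sample descriptors are hypothetical, not the Met’s actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class ImageRecord:
    # Hypothetical record layout -- the real FILM schema isn't public.
    image_id: str
    source: str                        # e.g. "CCTV", "body-worn camera"
    descriptors: set = field(default_factory=set)

def build_index(records):
    """Map each personal descriptor to the set of images tagged with it."""
    index = {}
    for rec in records:
        for tag in rec.descriptors:
            index.setdefault(tag, set()).add(rec.image_id)
    return index

def search(index, *descriptors):
    """Return image ids carrying ALL of the given descriptors."""
    hits = [index.get(tag, set()) for tag in descriptors]
    return set.intersection(*hits) if hits else set()

records = [
    ImageRecord("img-001", "CCTV", {"hat", "bag"}),
    ImageRecord("img-002", "body-worn camera", {"hat"}),
]
index = build_index(records)
print(search(index, "hat", "bag"))    # {'img-001'}
```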


In the days, months, and years after the riots, officers combed through thousands of photos and video clips from across the city. About 20 people in the force began to make identifications by matching familiar faces — people they’d seen elsewhere in the database or out in the field — with other faces in the database.

The vast majority of officers couldn’t do this. The low-quality images made it difficult to make out much in the first place, and many of the people in these photos were wearing bandanas or sunglasses. Yet these 20 officers had picked out and named more than 600 suspects, according to the BBC. These were either people they’d witnessed elsewhere whom they’d suspected of committing a crime or people they’d spotted previously on other potentially incriminating CCTV footage.

Many of these officers also ended up scoring highly on the Cambridge test, and some of them, like Porritt, are still working as super-recognizers with the London police. “So we could go back to the earliest images of 2011 and say we’ve just identified this guy for a burglary — how many more has he done that haven’t been solved?” Porritt said.

Since the super-recognizer task force got its official start on May 11, 2015, its officers have made roughly 2,300 identifications in cases that had been considered essentially unsolvable. The vast majority are for crimes like shoplifting and burglary.

In roughly 65.5% of those cases, the identified individual has been charged with a crime — a rate that has fluctuated between 57% and 74% over the task force’s existence, according to Porritt. Typically, a suspect is charged on the basis of a combination of facial recognition and additional evidence linking the person to the crime. The London police department did not have data immediately available on how many of these charged suspects were found guilty, but the fact that so many of these cases have made it to trial at all suggests the courts view testimony from super-recognizers as admissible evidence. And in what are called “linked series” — cases where a suspect is charged with anywhere from 20 to 30 crimes at once based on collected CCTV footage — 100% of the suspects have pleaded guilty, according to Porritt.

In addition, several studies of the super-recognizers’ abilities, including a paper published this August in the journal Applied Cognitive Psychology, support the idea that the super-recognizers are making legitimate identifications.


Some privacy and police-accountability advocates think the practice is getting ahead of the science, though. Instead of preventing crime, Camilla Graham Wood, a legal officer with UK-based privacy-rights organization Privacy International, said its use may be “combining the most worrying aspects” of facial-recognition technology “along with the subjective decisions, and errors therein, of human beings.”

“What we don’t know is … how good are they, how many mistakes do they make, what role does prejudice play in all of this?” Graham Wood added.

An ‘unknown field’

For years after the London riots ended, officers combed through thousands of images and hours of surveillance footage in every area where riots or looting had been documented. And they identified hundreds of suspects. But if they had surveilled another area in another borough, might they also have found numerous suspects who wouldn’t otherwise be identified? Did the riots — and the discovery of police officers with super-recognition abilities — justify the police’s decision to pay particular attention to these areas?

Porritt and his coworkers believe that using super-recognizers is massively improving the efficiency, speed, and accuracy of their work. And at every step of the process, he and his team have had psychologists and cognitive neuroscientists at their side, cheering on the efforts of the world’s first super-recognizer task force.

Super-recognition still does not have a scientific definition. Yet the Metropolitan Police have used it to pick out, arrest, and successfully charge thousands of individuals who otherwise would most likely never have been brought to court. Out of a force of 36,000 individuals, Porritt’s team of just five people has been able to make something approaching 25% of all the identifications from images and video in the entire city. “That for me is exceptional value for money,” Porritt said.

But it’s still early days for super-recognition as a science. “We’re working in a kind of unknown field with no real protocol,” Porritt told me. As a result, it’s impossible to say whether super-recognition is being applied in a way that reinforces existing, potentially discriminatory policies, or whether it’s being used to combat those policies through increased accuracy and objectivity.

Regardless, the use of super-recognizing officers does appear to lend increased legitimacy to the use of surveillance, but some question whether it will be applied fairly.

“We’re meant to have a culture of ‘policing by consent,’ but with these kinds of measures there is no consent,” Graham Wood said. “It enables perpetual policing, whether or not we’ve actually committed any crime.”


Amazon’s Echo is building a coffin that’s custom-made for Google (AMZN, GOOG, GOOGL)


Amazon’s Alexa, the personal assistant that launched with the Amazon Echo smart speaker, completely dominated this year’s Consumer Electronics Show. 

Just ask anyone: “Alexa Just Conquered CES. The World Is Next,” read one Wired headline. CNBC, the BBC, MIT Technology Review, and many others all had equally laudatory reports. Companies like Ford, Huawei, and LG, along with a long parade of startups, unveiled home appliances, phones, cars, and other gadgets with Alexa integration.

It’s a reflection of the sheer power that Amazon is starting to wield in the nascent smart home market, as a growing number of people come to rely on their Echo devices to run their homes and to automate their lives. The market for the Echo is still small compared to smartphones, but it’s growing fast.

Google already has the Google Home, its own voice-enabled speaker, designed to compete with the Amazon Echo. Microsoft partnered with Harman Kardon to bring its Cortana virtual assistant to a smart speaker. Even Apple is rumored to be working on a dedicated Siri speaker.

But so far Amazon is the smart speaker to beat, with an early start and plenty of buzz.


More importantly though, the rise of the Echo heralds a changing tech landscape that could spell big trouble for Google. No matter how many Google Home devices the search giant sells, Google will be playing on a field that’s tilted in Amazon’s favor. 

The rise of Alexa

A big part of Amazon’s early success with Alexa is that the company didn’t oversell it. After years of iPhone users being let down by Siri, the first truly mainstream voice assistant, Amazon billed the Echo as a speaker that, by the way, had a few smart voice commands built in.

Then, just as people got accustomed to the idea of talking to Alexa, and positive word of mouth spread, Amazon added more capabilities. Alexa now boasts thousands of “skills” that allow it to connect with apps like Uber, Twitter, and Bloomberg news. 

That’s helped Alexa and the Echo speaker earn a position as the central hub in so-called smart homes. Alexa’s voice-first interface is the perfect way to manage internet-connected lights, door locks, and thermostats — it’s way more intuitive than having to pull out a tablet or phone every two seconds.
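Amazon’s real Skills Kit is its own documented platform; the sketch below is not it. Purely as a hypothetical illustration of the voice-first pattern described above — an utterance gets parsed into an intent, which is routed to a device handler — it might look something like this:

```python
# Hypothetical voice-command router -- NOT the real Alexa Skills Kit API.
def parse_intent(utterance):
    """Map a spoken phrase to an (intent, value) pair with naive keyword rules."""
    text = utterance.lower()
    if "light" in text:
        return ("lights", "on" if " on" in text else "off")
    if "thermostat" in text:
        digits = [w for w in text.split() if w.isdigit()]
        return ("thermostat", int(digits[0])) if digits else (None, None)
    return (None, None)

def handle(intent, value):
    """Pretend device handlers; a real hub would call each appliance's API."""
    if intent == "lights":
        return f"Turning lights {value}."
    if intent == "thermostat":
        return f"Setting thermostat to {value} degrees."
    return "Sorry, I didn't catch that."

print(handle(*parse_intent("turn the lights on")))               # Turning lights on.
print(handle(*parse_intent("set the thermostat to 21 degrees"))) # Setting thermostat to 21 degrees.
```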


But here’s the crucial part. The Echo also makes it super-easy to buy stuff, specifically stuff from Amazon.

Alexa can play music from streaming services like Spotify, but it defaults to using Amazon’s own Prime Music, which is a pretty key feature for a smart speaker. It’s yet another reason for consumers to get a $99/year Amazon Prime subscription…which also gets you free shipping from Amazon, which encourages you to buy more from Amazon. 

In other words, whatever else it does, the Amazon Echo is designed to make it easier for you to give more of your money to Amazon. And that slick voice interface for “skills” and for controlling all of your smart home gear ensures that you’re always using the Echo and Alexa. 

It’s pretty genius, in a diabolical way.

Versus Google

This is where things get bad for Google. 

The more Alexa devices that Amazon and its partners sell, the better Amazon does at its core retail business. Every Echo is a customer who is more likely to spend more on books, groceries, music, and movies. 

Consider Google’s position, though. It can sell as many Google Home devices as it wants. And it’s true that Google is better at search than Amazon, by a country mile. But Google is a search advertising company, not a retail company, and those Google Home devices aren’t delivering ads. 

(Can you imagine if they did? “OK Google, open the garage door.” “Okay, Matt, but listen to this ad for Mailchimp first.”)


Sure, Google can use all the data it collects through its Google Home speaker to refine the ads people see on its search engine. But the point is that consumers will be spending less time in front of screens and looking at Google’s search ads. No matter how good Google’s search ads are, it doesn’t matter if people aren’t seeing them.

In fact, we’re already seeing some of this: Amazon is beating Google in the vital area of e-commerce search, because Amazon is the go-to destination for buying things. And thanks to the popularity of Amazon’s Alexa, that’s likely to continue.

Voice technology is still in its very early stages, and smartphones aren’t going away anytime soon.

And none of this is to say that Google’s problems are insurmountable. It wasn’t so long ago that investors were convinced that Facebook couldn’t monetize its mobile app. Today, Facebook is one of the primary mobile ad platforms. Google could certainly pull off a similar coup. 

But the direction in which computing is moving is clear, and as things stand now, Google’s weakness looks like Amazon’s strength. And with Alexa on the rise, the clock is ticking for Google.


You should only buy iPhone cables and Apple accessories that have this sticker on the box — here’s why

It’s a tough deal to ignore: you can buy a $3 iPhone charger from your local gas station, or fork over $19 for Apple’s own Lightning cable. And while the cheap option seems like a no-brainer at first, a recent study found that knockoff chargers fail basic safety tests 99% of the time and have even led to fires and electrocutions.

Here’s why you should always look for Apple’s MFi symbol whenever buying a cable for your iPhone.


Self-driving cars could spark a cycling revolution


LONDON — David Wynter knows the risks cyclists face on the roads as well as anyone.

The first time he was hit by a vehicle was in May 2014 in East London, by a van making a U-turn without checking its mirrors. “Took me five weeks to fully recover,” he said.

The second time was a little over a year later. Wynter, who is CEO of London data management startup Yambina, collided with a woman turning right into his lane. “Sprained my left wrist, needed a brace on it, a few cuts and deep bruises including my cheek where my cycling glasses cut into it.”

Most people who cycle regularly in cities have a story (or two) like this — or at least know someone who does.

The increased popularity of cycling, heightened driver awareness, and bike-lane initiatives are helping to improve hazardous conditions. But that doesn’t get around the fact that many urban areas simply weren’t designed for cyclists and drivers to share the road.

But some cyclists, technologists, and automobile manufacturers are starting to eagerly look at a surprising solution: Self-driving cars.

Some cyclists think a golden age is right around the corner

Self-driving cars, long a dream, are finally becoming a reality. Everyone from Google to Audi, from Uber to Volkswagen, is heavily investing in and developing the technology. The goal is to create totally autonomous vehicles, capable of driving in real-world conditions without any human input.

Right now, 94% of all car accidents in the US are due to human error, according to Google. Roads are dangerous, and doubly so for cyclists, who don’t have metal casing to protect them.

But self-driving tech — in theory — has significant advantages over any human driver. It won’t tire. It won’t get bored. It won’t be tempted to break the rules of the road. It will be able to look in every direction simultaneously. And crucially, it will have super-human reaction speeds.

Together, it all adds up to a potentially massive improvement in safety — the kind that historically hasn’t been possible without major urban redevelopment. By some estimates, self-driving cars could save 300,000 lives a decade in the United States alone.


Dr. Miklós Kiss, head of predevelopment piloted systems at Audi, thinks self-driving cars could be a boon to cyclists. It will “make it easier for cyclists because the behavior of automated cars will be more predictable than now,” he told Business Insider.

Many cyclists are equally enthusiastic. “If self-driving cars are proven safer for cyclists and pedestrians, cyclists would lay out the red carpet and welcome the revolution with both arms,” Andreas Kambanis, founder of biking site LondonCyclist.co.uk, told Business Insider.

“Just under 50% of cyclist deaths on London’s roads are caused by HGVs, so if the technology extended there, we’d immediately eliminate a huge danger.”

He added: “Beyond the safety aspect, self driving cars may also be more of a pleasure to drive around, as you wouldn’t expect it to do something erratic or to drive aggressively. This may in turn mean more cyclists on the road as the roads will now feel safer.”

Eli Allalouf, a director at Alyo International, used to bike everywhere, he said. “But after my second bicycle were stolen and too many injuries cycling on the road I have given up … I would love to have the ability to ride my bicycle every day to work but the risk is too high as I am a family man with kids and wife … After a certain period of the fully automated cars I think there will be a huge spike in cycling.”

Accounting for cyclists isn’t easy, say self-driving car companies

However, engineers working on the technology say that learning to deal with cyclists is throwing up unique challenges. They’re small, fast in urban environments, and nimble — but also relatively slow on open roads, and immensely vulnerable.

“Cyclists are more dynamic than cars. The biggest challenge is to predict their future behavior and driving route. Cyclists can be found on the road and on sidewalks too. Compared to cars they are not limited to only one road space. Sometimes cyclists do not obey traffic rules completely (red light-violators, etc.),” Audi’s Dr. Miklós Kiss said.

“Cyclists need their own behavior prediction model as they behave differently to car drivers and pedestrians.”
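What might such a model look like? As a toy illustration only — this is not Audi’s system — here is a constant-velocity predictor that widens its uncertainty margin for more “wobbly” road users:

```python
import math

# Toy forward predictor -- not any manufacturer's actual model. The only
# point it illustrates: cyclists get a wider uncertainty margin because
# they change heading far more freely than cars do.
HEADING_SIGMA_DEG = {"car": 5.0, "cyclist": 25.0, "pedestrian": 45.0}

def predict(x, y, speed_mps, heading_deg, dt, agent="cyclist"):
    """Constant-velocity position estimate plus a lateral uncertainty radius."""
    h = math.radians(heading_deg)
    nx = x + speed_mps * math.cos(h) * dt
    ny = y + speed_mps * math.sin(h) * dt
    # Lateral spread grows with distance travelled and with how freely
    # this kind of road user tends to change course.
    spread = speed_mps * dt * math.sin(math.radians(HEADING_SIGMA_DEG[agent]))
    return (nx, ny), spread

pos, margin = predict(0, 0, speed_mps=6.0, heading_deg=0, dt=2.0)
print(pos, round(margin, 1))   # (12.0, 0.0) with a ~5.1 m lateral margin
```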

Karl Iagnemma, CEO of autonomous tech startup Nutonomy, has encountered similar problems. “All autonomous vehicles under development today are being designed to detect and avoid cyclists. This requires that the sensing systems be specifically ‘trained’ to detect cyclists, and that the navigation systems be instructed how to maneuver in the presence of cyclists.”

Renault-Nissan CEO Carlos Ghosn takes a particularly dim view, telling CNBC in January 2016 that “one of the biggest problems [for self-driving cars] is people with bicycles.”

“They don’t respect any rules usually,” he claimed. “The car is confused by them, because from time-to-time they behave like pedestrians and from time-to-time they behave like cars.”

Google, one of the most high-profile developers of the tech, isn’t trash-talking cyclists — but recognises the difficulties they can pose. In its June 2016 progress report, Google’s self-driving car team explained how its vehicles treat cyclists differently (and “conservatively”) to other road users.

“For example, when our sensors detect a parallel-parked car with an open door near a cyclist, our car is programmed to slow down or nudge over to give the rider enough space to move towards the center of the lane and avoid the door,” the Google team wrote.

“We also aim to give cyclists ample buffer room when we pass, and our cars won’t squeeze by when cyclists take the center of the lane, even if there’s technically enough space. Whether the road is too narrow or they’re making a turn, we respect this indication that cyclists want to claim their lane.”

And because cyclists don’t have indicator lights, the technology has to predict cyclists’ intentions another way: By reading their hand gestures using its in-built cameras.
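Google has described these rules only in prose. Rendered as a hypothetical rule-based policy — the 1.5-meter buffer is an assumed figure, not Google’s — they might look something like this:

```python
# Hypothetical rule-based policy paraphrasing Google's published description.
# This is NOT Google's code; the 1.5 m buffer is an assumed figure.
def plan_around_cyclist(takes_lane, open_door_ahead, lateral_gap_m, hand_signal):
    MIN_BUFFER_M = 1.5
    if open_door_ahead:
        return "slow down / nudge over so the rider can clear the door"
    if takes_lane:
        # Never squeeze past, even if there is technically enough space.
        return "hold back and follow the cyclist"
    if hand_signal == "left":
        return "yield to the cyclist's signalled turn"
    if lateral_gap_m < MIN_BUFFER_M:
        return "wait for a wider gap before passing"
    return "pass with an ample buffer"

print(plan_around_cyclist(False, False, 2.0, None))   # pass with an ample buffer
```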


Uber’s self-driving car trials in December 2016 clearly illustrated the dangers the tech can pose to cyclists if not properly implemented. Its vehicles were performing a “right hook” turn that put cyclists at serious risk — with the San Francisco Bike Coalition calling it “one of the primary causes of collisions between cars and people who bike resulting in serious injury or fatality.”

Uber opted to keep its vehicles in circulation, to the alarm of cycling advocates, and had human drivers make the turn manually instead. Thankfully there were no reported injuries, and the trial was subsequently ended after the California DMV revoked the registrations of the vehicles because they didn’t have a license for the tests.

Self-driving cars aren’t a reason to stop supporting cyclists

These technological challenges — while tricky — are all theoretically surmountable. Companies like Renault-Nissan say they want autonomous vehicles in commercial production by 2020.

With over 50,000 cyclists injured in road accidents in America in 2014 (and another 21,000 in the UK), that date can’t come quickly enough.

“There will be a point where the number of accidents involving cars and cyclists will improve,” David Wynter said. “Only then will the public feel it is safer to commute in the cities.”

However, Andreas Kambanis, from LondonCyclist.co.uk, cautions that self-driving vehicles will not be a panacea for everything currently wrong with cycling in cities — and must not be used as an excuse to slack on supporting cyclists in other ways. “There is one big caveat to all of this. London’s roads are already heavily congested and polluted, more cars isn’t going to solve this problem, so the city must continue to invest in infrastructure that considers cyclists.”


Bitcoin plunged again

Bitcoin was still falling against the dollar on Saturday after losing more than a fifth of its value in trade Thursday.

Bitcoin was down more than 7% to $832 at 9:21 a.m. GMT (4:21 a.m. ET), according to Markets Insider. That’s down from over $1,100 earlier in the week. It has since recovered to $877.
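For anyone checking the arithmetic, the moves quoted here are easy to reproduce from the article’s own figures:

```python
def pct_change(old, new):
    """Percentage change from old to new."""
    return (new - old) / old * 100

# Figures quoted above (USD).
print(round(pct_change(1100, 832), 1))   # -24.4 -- "more than a fifth" off the week's high
print(round(pct_change(832, 877), 1))    # 5.4  -- the partial recovery to $877
```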

Here’s the chart:

[Chart: bitcoin’s price against the US dollar]

Bitcoin surged in the weeks running up to Christmas, then battled volatility around New Year and the first few days of 2017 before tanking on Thursday.

Some commentators have suggested that the rise was caused by Chinese investors looking to move their money out of China ahead of a rumoured further devaluation of the renminbi.

This theory is backed up by moves in the renminbi Thursday that coincided with bitcoin’s plunge. The renminbi rallied 2.6% against the dollar, posting its biggest two-day gains ever.

China’s central bank on Friday also warned investors to exercise caution when investing in virtual currencies such as bitcoin and met with the representatives of a major bitcoin trading platform in China, BTCC, according to Reuters. That appears to have shaken confidence further.


The hottest products from CES 2017

Get your gadget on!


It’s time once again for the International CES, the world’s largest consumer electronics trade show. Thousands of people flock to Las Vegas to see the latest gizmos, gadgets, TVs, computers, smartphones, robots, and other devices meant to make our lives easier. Here’s a sneak peek at some of the products on display at the show.


I switched from Android to iPhone 7 Plus — and I regret it wholeheartedly


I can pinpoint the exact moment I realised buying an iPhone 7 Plus was a mistake: When I tried to interview a London venture capitalist over the phone.

Apple has rigged the iPhone so that you cannot record a phone call. Taping phone calls is not generally illegal, especially if you disclose to the other person that the call is being recorded. And phones have had call-recording functions on them for more than a decade. But for some reason Apple just does not allow call-taping to happen on the iPhone.

OK, I thought. Not a problem. I will simply download an app to record the call for me.

Nope!

Turns out that because of Apple’s ban, call-taping apps only work if you merge your call with a second call to the app, which the app then records. Clunky but not impossible.

But … no!

I discovered that my wireless carrier doesn’t support merged calling on my iPhone calling plan, so I couldn’t even do that. 

OK, I thought. I’ll just put the call on speaker, and use Apple’s Voice Memos app to record the sound.

Thrice no!

The iPhone is rigged to prevent that, too.

So I solved the problem by putting my iPhone 7 Plus on speaker, and putting my old Android phone next to it, which taped the call through Voice Recorder by Appliqato.

My old Samsung Note 5 recorded the conversation wonderfully and shared it instantly into Dropbox for me. In fact, had I taken the call on the Note 5, I could have used Automatic Call Recorder by Appliqato to do the exact same job in a single step. 

But neither of these apps, nor any of these functions, is available on iPhone.

As someone who relies on their phone for work and uses it 18 hours a day, I consider my smartphone the most important device in my entire life. I need my phone to do everything for me. The Note 5 was the first phone that did all that. But I dropped it, putting a big crack in the back of its beautiful ivory gold case.

You have probably guessed why I bought an iPhone: I was one of the people eagerly looking forward to the Note 7, until it started blowing up in people’s faces.

So when the Note 7 was recalled on the day I placed my order, I bought an iPhone 7 Plus instead.

There is a lot to like on iPhone but …

I wanted a big-screen phablet that had a great camera and fantastic battery life. That narrowed my choice down to the Samsung Galaxy S7, the Google Pixel, and the iPhone 7 Plus. I was in the mood for a change, so that ruled out the Samsung. And I believe that people who write about tech should use both Apple and Android devices, especially as about 80% of smartphones are Androids. (Tech bloggers are 90% iPhone users, which is why their coverage of Android is often biased or riddled with errors.)

My last Apple phone was the iPhone 5, which I liked a lot. The camera on that phone was a huge step up. I knew that Apple would have a good battery and a good camera, so I thought what the heck — I’ll go back to iPhone.

There is a lot to like on iPhone 7 Plus.

The battery life is fantastic — it’s actually disappointing if I go below 50% by the end of the day. If battery life is one of your main worries then the iPhone 7 Plus is a good choice for you.

In fact, there is a shopping list of little things that just seem to perform better on iPhone than Android, such as wifi capture, Twitter, and Gmail (ironically, given that it’s made by Android parent Google).

Oddly, Instagram is a worse experience, probably something to do with Apple’s historic disinterest in fully integrated sharing tools in iOS.

iPhone no longer has the best camera

The most surprising thing, however, is the camera: It’s just not as good as Samsung’s Galaxy cameras, even with the fancy new dual-lens system. Samsung has been getting great reviews for its cameras for a while now, and I think I know why: The company seems to have realised that most photos are taken in low-light conditions: Nighttime, indoors, and the winter/autumn months. Only a minority of pictures are ever taken in full sun or the natural light of summer. Thus, Samsung appears to have optimised its cameras for the darkness, extracting decent pictures from even the most gloomy source material. iPhone, by contrast, produces much more muddy images, as these side-by-side photos from my colleague Rafi Letzter show:


It’s not that the iPhone’s camera is bad. It’s just that once you’ve used a Samsung going to Apple feels like a step down — especially given the price you’re paying.

The little things

There are a few other things that make the iPhone 7 Plus feel like it’s a yard behind the competition, too:

  • There is still no dedicated “back” button on iPhone, even though Apple has gone to lengths to add a “go back” function into virtually everything. Sometimes you get an arrow, sometimes a little label showing the last app you were in, sometimes nothing. Mostly it’s fine. But a proper back button makes everything simpler and neater.
  • I really missed Android’s “menu” button, too. Not all apps are intuitively designed, but in Android there is a rescue button for dummies — hit “menu” and you get a list of all the things you can do on the immediate screen you’re looking at. Not on iPhone. 
  • The Control Center lets you turn wifi on and off but not mobile data. That feels like an oversight. 
  • And the lack of a full keyboard with a number row creates extra typing steps that don’t exist on Android.

But the place where iPhone really is a generation behind Android is the notification screen. iPhone has come a long way on this, and the level and functionality of the notifications and the status bar are so much better than they used to be. But on Android you can basically control the entire phone from the notifications screen. And you can customise it to your heart’s content.

iOS offers only a basic version of all that. Once you’re used to Android’s system navigation speed it feels crippling to tap — and tap, and tap again — to get where you need to be in iOS.

So, bottom line: My iPhone 7 Plus is very good. But it was not worth the £938/$1,159 (including Applecare) that I paid for it. Which is why I’ll be looking very carefully at the new Androids coming from Google and Samsung later this year.


THE CHATBOT MONETIZATION REPORT: Sizing the market, key strategies, and how to navigate the chatbot opportunity (FB, AAPL, GOOG)


Improving artificial intelligence (AI) technology and the proliferation of messaging apps — which enable users and businesses to interact through a variety of mediums, including text, voice, image, video, and file sharing — are fueling the popularity of chatbots.

These software programs use messaging as an interface through which to carry out various tasks, like checking the weather or scheduling a meeting. Bots are still nascent and monetization models have yet to be established for the tech, but there are a number of existing strategies — like “as-a-service” or affiliate marketing — that will likely prove successful for bots used as a tool within messaging apps.
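Stripped to its essentials, that “messaging as an interface” pattern is just a command dispatcher. Here is a minimal, hypothetical sketch — the commands and canned replies are invented for illustration:

```python
# Minimal hypothetical chatbot: messaging is the interface, tasks are handlers.
def weather(args):
    return "It's 8 degrees and cloudy."   # a real bot would call a weather API

def schedule(args):
    return f"Meeting scheduled: {' '.join(args) or '(no details given)'}"

COMMANDS = {"weather": weather, "schedule": schedule}

def reply(message):
    verb, *args = message.strip().split()
    handler = COMMANDS.get(verb.lower())
    return handler(args) if handler else "Sorry, I can't do that yet."

print(reply("weather"))                    # It's 8 degrees and cloudy.
print(reply("schedule standup at 9am"))    # Meeting scheduled: standup at 9am
```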

Chatbots can also provide brands with value adds — services that don’t directly generate revenue, but help increase the ability of brands and businesses to better target and serve customers, and increase productivity. These include bots used for research, lead generation, and customer service.

A new report from BI Intelligence investigates how brands can monetize their chatbots by tailoring existing models. It also explores various ways chatbots can be used to cut businesses’ operational costs. And finally, it highlights the slew of barriers that brands need to overcome in order to tap into the potentially lucrative market. 

Here are some of the key takeaways:

  • Chatbot adoption has already taken off in the US with more than half of US users between the ages of 18 and 55 having used them, according to exclusive BI Intelligence survey data.
  • Chatbots boast a number of distinct features that make them a perfect vehicle for brands to reach consumers. These include a global presence, high retention rates, and an ability to appeal to a younger demographic.
  • Businesses and brands are looking to capitalize on the potential to monetize the software. BI Intelligence identifies four existing models that can be successfully tailored for chatbots. These models include Bots-as-a-Service, native content, affiliate marketing, and retail sales.
  • Chatbots can also provide brands with value adds, or services that don’t directly generate revenue. Bots used for research, lead generation, and customer service can cut down on companies’ operational costs.
  • There are several benchmarks chatbots must reach, and barriers they must overcome, before becoming successful revenue generators. 

In full, the report:

  • Explains the different ways businesses can access, utilize, and distribute content via chatbots.
  • Breaks down the pros and cons of each chatbot monetization model.
  • Identifies the additional value chatbots can provide businesses outside of direct monetization.
  • Looks at the potential barriers that could limit the growth, adoption, and use of chatbots and therefore their earning potential.

Interested in getting the full report? Here are several ways to access it:

  1. Subscribe to an All-Access pass to BI Intelligence and gain immediate access to this report and over 100 other expertly researched reports. As an added bonus, you’ll also gain access to all future reports and daily newsletters to ensure you stay ahead of the curve and benefit personally and professionally. >> Learn More Now
  2. Purchase & download the full report from our research store. >> Purchase & Download Now
