$3 Billion Google-Backed AI Unicorn UiPath Set to Achieve Revenue Growth of 5614%

From CCN.com: UiPath, an artificial intelligence (AI) startup first backed by Google’s CapitalG fund in 2018, is, according to leaked reports, set to achieve over 5,000% revenue growth.

Business Insider sources claim UiPath, also backed by Sequoia Capital and Accel, is set to hit $200 million annual recurring revenue (ARR). ARR is a metric used by software-as-a-service (SaaS) providers to reflect subscription revenues.

The ARR figure for UiPath was just $3.5 million in 2016. Its ARR hit $150 million in November 2018. When it reaches its predicted $200 million ARR in the coming weeks, UiPath will have grown by 33% in less than three months.
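The growth figures above are easy to sanity-check with a couple of lines of arithmetic:

```python
# ARR figures reported for UiPath (in millions of USD)
arr_2016 = 3.5
arr_nov_2018 = 150.0
arr_predicted = 200.0

# Growth from the 2016 figure to the predicted $200M ARR
total_growth_pct = (arr_predicted - arr_2016) / arr_2016 * 100
print(round(total_growth_pct))  # 5614 — matching the headline figure

# Growth from November 2018 to the predicted figure
recent_growth_pct = (arr_predicted - arr_nov_2018) / arr_nov_2018 * 100
print(round(recent_growth_pct))  # 33 — i.e. 33% in under three months
```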

UiPath is a robotic process automation (RPA) specialist providing “software robots” to enterprises in the form of its SaaS platform. The software automates back-office business tasks using AI.

By early 2018 the startup, focused on supplying Fortune 500 firms, had 700 enterprise customers, including BMW Group and Huawei, up from just 100 the year before. That surge catapulted UiPath’s ARR by 690% and made it a leader in the RPA market. CEO Daniel Dines says:

“Our idea is to have a robot for every employee, working side by side on the same computer in assisted automation. We see this as a compelling proposition to many of our customers, having both back office and front office.”

“First” Romanian Unicorn Startup is Now Worth Over $3 Billion

UiPath is dubbed the “first” Romanian unicorn. The AI company reached unicorn status, a valuation of over $1 billion, with its March 2018 round of funding. This round was led by Accel and joined by Google’s CapitalG.

CapitalG then went on to lead a September 2018 funding round. By this point, UiPath had raised $418 million in capital for the year and was valued at over $3 billion.

Formed in 2005, UiPath retains its development team in Bucharest, Romania and has offices globally including in New York and Japan.

UiPath isn’t the fastest company to gain “unicorn” prestige. Still, UiPath’s rapid evolution proves artificial intelligence is a good bet technologically and strategically for investors. For Google parent Alphabet, AI is less of a choice and more of a necessity.

Laela Sturdy, a partner at Alphabet’s CapitalG says:

“We strongly believe that RPA is a primary route for organizations to benefit from AI.”

The company was ranked 26th on Deloitte’s Technology Fast 500 last year and entered the Forbes 2018 Cloud providers listing in 14th position in September. By this time UiPath was boasting 1,750 enterprise and government customers.

Reporting also indicates UiPath’s clients now include the armed forces and the Internal Revenue Service (IRS). UiPath is still a private company, with no signs yet of plans to go public.

Gartner predicts the value of the AI market will reach $1.2 trillion in 2018. Of emerging technologies, AI will be one of the first to see widespread use, outpacing blockchain and even IoT. For FAANG companies like Google, exploring AI directly and indirectly is common sense if they are to retain their digital market domination.


Source
Author: Melanie Kramer 

Tech Breakthroughs of 2018

Development across the entire information technology landscape certainly didn’t slow down this year. From CRISPR babies, to the rapid decline of the crypto markets, to a new robot on Mars, and discovery of subatomic particles that could change modern physics as we know it, there was no shortage of headline-grabbing breakthroughs and discoveries.

As 2018 comes to a close, we can pause and reflect on some of the biggest technology breakthroughs and scientific discoveries that occurred this year.

I reached out to a few Singularity University speakers and faculty across the various technology domains we cover asking what they thought the biggest breakthrough was in their area of expertise. The question posed was:

“What, in your opinion, was the biggest development in your area of focus this year? Or, what was the breakthrough you were most surprised by in 2018?”

I can share that for me, hands down, the most surprising development I came across in 2018 was learning that a publicly traded company that was briefly valued at over $1 billion, and has over 12,000 employees and contractors spread around the world, has no physical office space and the entire business is run and operated from inside an online virtual world. This is Ready Player One stuff happening now.

For the rest, here’s what our experts had to say.

DIGITAL BIOLOGY

Dr. Tiffany Vora | Faculty Director and Vice Chair, Digital Biology and Medicine, Singularity University

“That’s easy: CRISPR babies. I knew it was technically possible, and I’ve spent two years predicting it would happen first in China. I knew it was just a matter of time but I failed to predict the lack of oversight, the dubious consent process, the paucity of publicly-available data, and the targeting of a disease that we already know how to prevent and treat and that the children were at low risk of anyway.

I’m not convinced that this counts as a technical breakthrough, since one of the girls probably isn’t immune to HIV, but it sure was a surprise.”

QUANTUM COMPUTING

Andrew Fursman | Co-Founder/CEO 1Qbit, Faculty, Quantum Computing, Singularity University

“There were two last-minute holiday season surprise quantum computing funding and technology breakthroughs:

First, right before the government shutdown, one priority legislative accomplishment will provide $1.2 billion in quantum computing research over the next five years. Second, there’s the rise of ions as a truly viable, scalable quantum computing architecture.”

*Read this Gizmodo profile on an exciting startup in the space to learn more about this type of quantum computing

ENERGY

Ramez Naam | Chair, Energy and Environmental Systems, Singularity University

“2018 had plenty of energy surprises. In solar, we saw unsubsidized prices in the sunny parts of the world at just over two cents per kWh, or less than half the price of new coal or gas electricity. In the US southwest and Texas, new solar is also now cheaper than new coal or gas. But even more shockingly, in Germany, which is one of the least sunny countries on earth (it gets less sunlight than Canada), the average bid for new solar in a 2018 auction was less than 5 US cents per kWh. That’s as cheap as new natural gas in the US, and far cheaper than coal, gas, or any other new electricity source in most of Europe.

In fact, it’s now cheaper in some parts of the world to build new solar or wind than to run existing coal plants. Think tank Carbon Tracker calculates that, over the next 10 years, it will become cheaper to build new wind or solar than to operate coal power in most of the world, including specifically the US, most of Europe, and—most importantly—India and the world’s dominant burner of coal, China.”

GLOBAL GRAND CHALLENGES

Darlene Damm | Vice Chair, Faculty, Global Grand Challenges, Singularity University

“In 2018 we saw a lot of areas in the Global Grand Challenges move forward—advancements in robotic farming technology and cultured meat, low-cost 3D printed housing, more sophisticated types of online education expanding to every corner of the world, and governments creating new policies to deal with the ethics of the digital world. These were the areas we were watching and had predicted there would be change.

What most surprised me was to see young people, especially teenagers, start to harness technology in powerful ways and use it as a platform to make their voices heard and drive meaningful change in the world. In 2018 we saw teenagers speak out on a number of issues related to their well-being and launch digital movements around issues such as gun and school safety, global warming and environmental issues. We often talk about the harm technology can cause to young people, but on the flip side, it can be a very powerful tool for youth to start changing the world today and something I hope we see more of in the future.”

BUSINESS STRATEGY

Pascal Finette | Chair, Entrepreneurship and Open Innovation, Singularity University

“Without a doubt the rapid and massive adoption of AI, specifically deep learning, across industries, sectors, and organizations. What was a curiosity for most companies at the beginning of the year has quickly made its way into the boardroom and leadership meetings, and all the way down into the innovation and IT department’s agenda. You are hard-pressed to find a mid- to large-sized company today that is not experimenting or implementing AI in various aspects of its business.

On the slightly snarkier side of answering this question: The very rapid decline in interest in blockchain (and cryptocurrencies). The blockchain party was short, ferocious, and ended earlier than most would have anticipated, with a huge hangover for some. The good news—with the hot air dissipated, we can now focus on exploring the unique use cases where blockchain does indeed offer real advantages over centralized approaches.”

*Author note: snark is welcome and appreciated

ROBOTICS

Hod Lipson | Director, Creative Machines Lab, Columbia University

“The biggest surprise for me this year in robotics was learning dexterity. For decades, roboticists have been trying to understand and imitate dexterous manipulation. We humans seem to be able to manipulate objects with our fingers with incredible ease—imagine sifting through a bunch of keys in the dark, or tossing and catching a cube. And while there has been much progress in machine perception, dexterous manipulation remained elusive.

There seemed to be something almost magical in how we humans can physically manipulate the physical world around us. Decades of research in grasping and manipulation, and millions of dollars spent on robot-hand hardware development, have brought us little progress. But in late 2018, the Berkeley OpenAI group demonstrated that this hurdle may finally succumb to machine learning as well. Given 200 years’ worth of practice, machines learned to manipulate a physical object with amazing fluidity. This might be the beginning of a new age for dexterous robotics.”

MACHINE LEARNING

Jeremy Howard | Founding Researcher, fast.ai, Founder/CEO, Enlitic, Faculty Data Science, Singularity University

“The biggest development in machine learning this year has been the development of effective natural language processing (NLP).

The New York Times published an article last month titled “Finally, a Machine That Can Finish Your Sentence,” which argued that NLP neural networks have reached a significant milestone in capability and speed of development. The “finishing your sentence” capability mentioned in the title refers to a type of neural network called a “language model,” which is literally a model that learns how to finish your sentences.
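As a toy illustration of what a language model does (not the neural-network approach the article describes), even a simple bigram model can “finish your sentence” by repeatedly predicting the most likely next word from counts:

```python
from collections import Counter, defaultdict

# A tiny illustrative corpus; real language models train on billions of words.
corpus = "the robot can finish your sentence . the robot can learn language .".split()

# Count bigrams: how often each word follows another
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def finish(word, max_words=5):
    """Greedily extend a sentence by picking the most common next word."""
    out = [word]
    for _ in range(max_words):
        if word not in bigrams:
            break
        word = bigrams[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(finish("the"))  # → "the robot can finish your sentence"
```

The neural language models in the article replace these raw counts with learned representations that generalize to words and contexts never seen verbatim, which is what makes fine-tuning them for other NLP tasks so effective.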

Earlier this year, two systems (one, called ELMo, is from the Allen Institute for AI, and the other, called ULMFiT, was developed by me and Sebastian Ruder) showed that such a model could be fine-tuned to dramatically improve the state-of-the-art in nearly every NLP task that researchers study. This work was further developed by OpenAI, which in turn was greatly scaled up by Google Brain, who created a system called BERT which reached human-level performance on some of NLP’s toughest challenges.

Over the next year, expect to see fine-tuned language models used for everything from understanding medical texts to building disruptive social media troll armies.”

DIGITAL MANUFACTURING

Andre Wegner | Founder/CEO Authentise, Chair, Digital Manufacturing, Singularity University

“Most surprising to me was the extent and speed at which the industry finally opened up.

While previously only a few 3D printing suppliers had APIs and knew what to do with them, 2018 saw nearly every OEM (original equipment manufacturer) enabling data access and, even more surprisingly, shying away from proprietary standards and adopting MTConnect, as stalwarts such as 3D Systems and Stratasys have done. This means that in two to three years, data access to machines will be easy, commonplace, and free. The value will be in what is being done with that data.

Another example of this openness are the seemingly endless announcements of integrated workflows: GE’s announcement with most major software players to enable integrated solutions, EOS’s announcement with Siemens, and many more. It’s clear that all actors in the additive ecosystem have taken a step forward in terms of openness. The result is a faster pace of innovation, particularly in the software and data domains that are crucial to enabling comprehensive digital workflow to drive agile and resilient manufacturing.

I’m more optimistic we’ll achieve that now than I was at the end of 2017.”

SCIENCE AND DISCOVERY

Paul Saffo | Chair, Future Studies, Singularity University, Distinguished Visiting Scholar, Stanford Media-X Research Network

“The most important development in technology this year isn’t a technology, but rather the astonishing science surprises made possible by recent technology innovations. My short list includes the discovery of the “neptmoon”, a Neptune-scale moon circling a Jupiter-scale planet 8,000 lightyears from us; the successful deployment of the Mars InSight Lander a month ago; and the tantalizing ANITA detection (what could be a new subatomic particle which would in turn blow the standard model wide open). The highest use of invention is to support science discovery, because those discoveries in turn lead us to the future innovations that will improve the state of the world—and fire up our imaginations.”

ROBOTICS

Pablos Holman | Inventor, Hacker, Faculty, Singularity University

“Just five or ten years ago, if you’d asked any of us technologists “What is harder for robots?  Eyes, or fingers?” We’d have all said eyes. Robots have extraordinary eyes now, but even in a surgical robot, the fingers are numb and don’t feel anything. Stanford robotics researchers have invented fingertips that can feel, and this will be a kingpin that allows robots to go everywhere they haven’t been yet.”

BLOCKCHAIN

Nathana Sharma | Blockchain, Policy, Law, and Ethics, Faculty, Singularity University

“2017 was the year of peak blockchain hype. 2018 has been a year of resetting expectations and technological development, even as the broader cryptocurrency markets have faced a winter. It’s now about seeing adoption and applications that people want and need to use rise. An incredible piece of news from December 2018 is that Facebook is developing a cryptocurrency for users to make payments through Whatsapp. That’s surprisingly fast mainstream adoption of this new technology, and indicates how powerful it is.”

ARTIFICIAL INTELLIGENCE

Neil Jacobstein | Chair, Artificial Intelligence and Robotics, Singularity University

“I think one of the most visible improvements in AI was illustrated by the Boston Dynamics Parkour video. This was not due to an improvement in brushless motors, accelerometers, or gears. It was due to improvements in AI algorithms and training data. To be fair, the video released was cherry-picked from numerous attempts, many of which ended with a crash. However, the fact that it could be accomplished at all in 2018 was a real win for both AI and robotics.”

NEUROSCIENCE

Divya Chander | Chair, Neuroscience, Singularity University

“2018 ushered in a new era of exponential trends in non-invasive brain modulation. Changing behavior or restoring function takes on a new meaning when invasive interfaces are no longer needed to manipulate neural circuitry. The end of 2018 saw two amazing announcements: the ability to grow neural organoids (mini-brains) in a dish from neural stem cells that started expressing electrical activity, mimicking the brain function of premature babies, and the first (known) application of CRISPR to genetically alter two fetuses grown through IVF. Although this was ostensibly to provide genetic resilience against HIV infections, imagine what would happen if we started tinkering with neural circuitry and intelligence.”


Source
Author: Aaron Frank

Blockchain and the 4th Industrial Revolution

Humankind is approaching a new industrial revolution. Unlike the previous three, it will impact every industry and span the digital, physical, and even biological worlds, arriving at breakneck speed due to an unprecedented number of breakthroughs. This revolution is evolving at an exponential rather than a linear pace when compared with the previous three (1).

This revolution can be summarized by data, its connectivity, and what it will be used for. The amount and reach of connectivity will be unmatched, beyond anything the Digital Revolution (the 3rd industrial revolution) has offered. Furthermore, the number of connected devices is expected to grow to a mind-boggling figure somewhere between 28 and 50 billion by 2020, according to estimates published by multiple CEOs as mentioned in IEEE.

Having such a massive number of connected devices also entails that network speeds will be much faster than current networks. For example, “with 5G, users should be able to download a high-definition film in under a second (a task that could take 10 minutes on 4G LTE). Wireless engineers say these networks will boost the development of other new technologies, such as autonomous vehicles, virtual reality, and the Internet of Things” (2).
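The quoted comparison is easy to sanity-check; the file size and link rates below are hypothetical round numbers (not from the article), but they give the right orders of magnitude:

```python
def download_time_seconds(size_gigabytes, rate_megabits_per_s):
    """Time to download a file of the given size at a sustained link rate."""
    size_megabits = size_gigabytes * 8 * 1000  # 1 GB = 8000 Mbit (decimal units)
    return size_megabits / rate_megabits_per_s

film_gb = 4.0  # hypothetical HD film size

# 4G LTE at ~50 Mbit/s: roughly 10.7 minutes, matching the quote
print(download_time_seconds(film_gb, 50) / 60)

# 5G at a peak rate of ~20 Gbit/s: on the order of a second or two
print(download_time_seconds(film_gb, 20_000))
```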

In my opinion, blockchain technology will be the core technology underpinning the 4th industrial revolution. For example, artificial intelligence and machine learning will remain a challenge if trust and privacy in gathered data are not present. Machine learning requires a large amount of data about a particular object, person, or thing in order to learn, predict behavior, and make decisions. However, if this data is breached, corrupted, or manipulated, the job of machine learning becomes far more difficult and thus prone to erroneous or even catastrophic decisions. Blockchain addresses these issues, enhancing the sound development and valuable uses of machine learning technologies, in addition to preserving users’ privacy, as they can contribute data to the blockchain while choosing what to share.

The pillars of the 4th industrial revolution will become a privacy and security nightmare without blockchain. Take, for example, the privacy concerns around Facebook. Merge that with the upcoming large number of connected devices and other technologies that will embed connectivity even biologically. This will produce an AI for every social network, eCommerce website, business, and perhaps everyone that profits from data. Without blockchain, your news feed, email, phone, social profile(s), and potentially smart home appliances will become an ads nightmare. Likewise, cyber criminals will have new and more powerful tools at their disposal. Machine learning devices, eCommerce tools and algorithms, and autonomous systems can indiscriminately gather information about you that may not represent your genuine interests.

The main pillar of the 4th industrial revolution is data and its connectivity, and the key issue will be how this data is used and handled by the complicated society we live in. Privacy breaches, the correct use of data (especially by machine learning technologies), and other potential issues will become far more complicated problems if blockchain is not embedded alongside the rest of these technologies. Blockchain is the checks-and-balances system that ensures other technologies and actors will not use data in ways unauthorized by its owner. Blockchain will also enhance these technologies by giving them the necessary and correct data, thereby avoiding data-processing waste and enhancing correct and safe autonomous decision-making.


Source
Author: Mohammad A. Edaibat

Britain Is Developing an AI-Powered Predictive Policing System

The tantalizing prospect of predicting crime before it happens has got law enforcement agencies excited about AI. Most efforts so far have focused on forecasting where and when crime will happen, but now British police want to predict who will perpetrate it.

The idea is reminiscent of sci-fi classic Minority Report, where clairvoyant “precogs” were used to predict crime before it happened and lock up the would-be criminals. The National Data Analytics Solution (NDAS) under development in the UK will instead rely on AI oracles to scour police records and statistics to find those at risk of violent crime.

Machine learning will be used to analyze a variety of local and national police databases containing information like crime logs, stop and search records, custody records, and missing person reports. In particular, it’s aimed at identifying those at risk of committing or becoming a victim of gun or knife crime, and those who could fall victim to modern slavery.

The program is being led by West Midlands Police (WMP), which covers the cities of Birmingham, Coventry, and Wolverhampton, but the aim is for it to eventually be used by every UK police force. They will produce a prototype by March of 2019.

What police would do with the information has yet to be determined. The head of WMP told New Scientist they won’t be preemptively arresting anyone; instead, the idea would be to use the information to provide early intervention from social or health workers to help keep potential offenders on the straight and narrow or protect potential victims.

But data ethics experts have voiced concerns that the police are stepping into an ethical minefield they may not be fully prepared for. Last year, WMP asked researchers at the Alan Turing Institute’s Data Ethics Group to assess a redacted version of the proposal, and last week they released an ethics advisory in conjunction with the Independent Digital Ethics Panel for Policing.

While the authors applaud the force for attempting to develop an ethically sound and legally compliant approach to predictive policing, they warn that the ethical principles in the proposal are not developed enough to deal with the broad challenges this kind of technology could throw up, and that “frequently the details are insufficiently fleshed out and important issues are not fully recognized.”

The genesis of the project appears to be a 2016 study carried out by data scientists for WMP that used statistical modeling to analyze local and national police databases to identify 32 indicators that could predict those likely to persuade others to commit crimes. These were then used to generate a list of the top 200 “influencers” in the region, which the force said could be used to identify those vulnerable to being drawn into criminality.

The ethics review notes that this kind of approach raises serious ethical questions about undoing the presumption of innocence and allowing people to be targeted even if they’ve never committed an offense and potentially never would have done so.

Similar approaches tested elsewhere in the world highlight the pitfalls of this kind of approach. Chicago police developed an algorithmically-generated list of people at risk of being involved in a shooting for early intervention. But a report from RAND Corporation showed that rather than using it to provide social services, police used it as a way to target people for arrest. Despite that, it made no significant difference to the city’s murder rate.

It’s also nearly inevitable that this kind of system will be at risk of replicating the biases that exist in traditional policing. If police disproportionately stop and search young black men, any machine learning system trained on those records will reflect that bias.

A major investigation by ProPublica in 2016 found that software widely used by US courts to predict whether someone would offend again, and therefore guide sentencing, was biased against blacks. It’s not a stretch to assume that AI used to profile potential criminals would face similar problems.

Bias isn’t the only problem. As the ACLU’s Ezekiel Edwards notes, the data collected by police is simply bad. It’s incomplete, inconsistent, easily manipulated, and slow, and as with all machine learning, rubbish in equals rubbish out. But if you’re using the output of such a system to intervene ahead of whatever you’re predicting, it’s incredibly hard to assess how accurate it is.

All this is unlikely to stem the appetite for these kinds of systems, though. Police around the world are dealing with an ever-changing and increasingly complex criminal landscape (and in the UK, budget cuts too) so any tools that promise to make their jobs easier are going to seem attractive. Let’s just hope that agencies follow the lead of WMP and seek out independent oversight of their proposals.


Source
Author: Edd Gent

AI Used To Create Synthetic Fingerprints, Fools Biometric Scanners

Fingerprint scanners have become pretty commonplace on smartphones and, in some cases, even laptops. Given that we all have unique fingerprints, it makes sense to use them as a form of biometric security, but unfortunately it seems they might not be as secure as we thought.

This is according to recent research from New York University (via Gizmodo), which found that AI can be used to generate synthetic fingerprints that fool biometric scanners. Dubbed DeepMasterPrints, the system managed to match 23% of fingerprints in a system with a supposed error rate of one in a thousand.

This also takes advantage of the fact that most biometric systems don’t merge partial prints together to form a full image, and also the fact that when you scan your prints, only the surface touching the scanner is read. This means that through trial and error, these AI-generated prints could potentially be used by hackers to bypass a system, similar to how brute force attacks work when attempting to figure out a password.
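The brute-force analogy can be made concrete. With the reported 23% match rate per synthetic print, the chance of at least one false match grows quickly over repeated attempts; this back-of-the-envelope sketch treats attempts as independent, which is a simplifying assumption:

```python
def p_at_least_one_match(per_attempt_rate, attempts):
    """Probability of at least one false match in n independent attempts."""
    return 1 - (1 - per_attempt_rate) ** attempts

# Reported DeepMasterPrints match rate at the 1-in-1000 false-accept setting
rate = 0.23
for n in (1, 5, 10):
    print(n, round(p_at_least_one_match(rate, n), 3))
```

After only ten tries, the odds of a match exceed 90%, which is why the researchers compare the approach to password brute-forcing.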

Now before you get too worried about hackers using this system to bypass your phone’s fingerprint system, lead researcher Philip Bontrager said,

“A similar setup to ours could be used for nefarious purposes, but it would likely not have the success rate we reported unless they optimized it for a smartphone system. This would take a lot of work to try and reverse engineer a system like that.”

The researchers are also hoping that their work will prompt companies to come up with more secure systems in the future.


Source
Author: Tyler Lee 

Social robotics platform reveals a new face for conversational artificial intelligence

A new social robotics platform, featuring a robot with a human-like face that can communicate with human-like characteristics, is set to advance conversational artificial intelligence.

What sets the social robotics platform apart from other robots in the world today

“is the face”, says Furhat Robotics, which it claims uses “computer animation to create incredibly immersive characters and experiences.”

Furhat says that the social robotics platform can communicate with humans the way we do with each other — at least some of us — by speaking, listening, showing emotions and having conversations.

Apparently, it has been specifically designed to create an intuitive, humanlike and engaging experience making the way that people interact with technology much more natural.

The company has also developed a powerful and sophisticated platform specifically for multisensory and immersive language interactions, enabling developers, working in any industry sector and from any country, to build highly advanced applications for social robots.

It says that 70 international brands including Disney, Merck and Honda are already working with Furhat, but that this new platform takes the concept of conversational AI to the next level. It claims that the social robotics platform will create new social and business opportunities and accelerate the development process.

It’s an impressive advancement and for those of us who watch this space, an exciting move forward.

But are humans ready to accept life-like robots? According to a study carried out by Kumar Yogeeswaran et al, humans might well perceive very humanlike robots as a realistic threat to human jobs, safety, and resources, as well as a threat to human identity and uniqueness, especially if such robots also outperform humans.

However, the study only suggested that such life-like robots were seen as a threat if the social robotics technology can outperform humans in physical and mental tasks.


On the other hand, a study published in JAMDA, investigating how robots could support people feeling lonely, found that residents of a rest home who interacted with a robot in the form of a seal saw a positive benefit.

Maybe the fundamental problem is fear of the unknown and once conversational AI and social robotics becomes more common, people’s fears and concerns will dissipate.

Returning to Furhat, Samer Al Moubayed, the company’s CEO, said:

“This is the culmination of many years of dedicated research and development both internally and through working with industry and technology partners. From its beginnings we have taken Furhat to a point where social robots are no longer a hope for the future but a reality of today”.


Source
Author: Michael Baxter

This new AI can track 200 eye movements to determine your personality traits

  • German scientists have developed software that can detect character traits through eye-tracking, according to a new study.
  • Using over 200 physical behaviours, such as the frequency with which subjects blinked, the researchers established connections between eye movements and personality traits.
  • It’s been suggested that the technology could eventually be developed to assist those with autism to gauge their peers’ emotions and responses.

Anyone who’s seen “2001: A Space Odyssey” will already be familiar with that nagging fear that artificial intelligence will one day become so intelligent, it will simply turn on us — but computer scientists from Saarbrücken and Stuttgart don’t seem at all fazed at the prospect.

According to a new study, they’ve developed software that can recognise personality traits through eye-tracking.

“There are already some studies in the field of emotion recognition through facial expression analysis. But this is the first time we’ve managed to establish personal traits,” said Andreas Bulling in an interview with Business Insider.

Bulling, head of the research group “Perceptual User Interfaces” at the Max Planck Institute for Computer Science at Saarland University in Saarbrücken, joined forces with scientists from Stuttgart and psychologists from the University of South Australia to develop software that can recognise character traits through eye-tracking.

The study found personal traits are more stable than emotions

As part of their research, the scientists had 50 students, all of whom were equipped with eye-trackers recording their eye movements, walk across their campus for around 10 minutes and buy something from one of the campus shops.

Subjects were asked to complete questionnaires commonly used to evaluate people’s personality traits.

Using over 200 markers such as the frequency with which subjects blinked, how long they focused on something, and to what extent their pupils dilated, the researchers managed to establish which traits were associated with which eye movements.

While there are currently no plans for Bulling’s team to get involved in robotics, he says there’s a lot of scope for the software to be used in the field.

Using the data obtained from the study, the scientists developed “decision trees” for various personality traits, which the software then used to recognise those characteristics. Through eye tracking, the software can “see” whether someone is conscientious, sociable, tolerant, and even to what extent they might be emotionally unstable. It can also pick up on how inquisitive someone is.
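To make the “decision tree” idea concrete, here is a minimal hand-rolled sketch in Python. The feature names, thresholds, and labels are invented for illustration only and are not taken from the study, which used over 200 eye-movement markers:

```python
# Illustrative only: the features, thresholds, and trait labels below are
# invented, not the decision trees the Saarbruecken researchers learned.

def sociability_tree(features: dict) -> str:
    """Tiny decision tree mapping eye-movement features
    to a coarse 'sociability' rating."""
    if features["blink_rate_hz"] > 0.4:          # frequent blinking
        if features["mean_fixation_ms"] < 250:   # short fixations
            return "high"
        return "medium"
    if features["pupil_dilation_ratio"] > 1.2:   # strong pupil response
        return "medium"
    return "low"

subject = {"blink_rate_hz": 0.5, "mean_fixation_ms": 200,
           "pupil_dilation_ratio": 1.1}
print(sociability_tree(subject))  # -> high
```

In the actual research such trees are learned from labelled data (the questionnaire scores) rather than written by hand, which is what makes the approach a machine-learning one.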

What makes the software even more notable is that it doesn’t need manual updates: as a machine-learning system, it learns from the data itself.

“It’s novel in itself that this kind of software works at all,” said Bulling. Emotions can vary from one situation to the next, changing quickly and often; character traits, by contrast, are very stable.

“Initially, it wasn’t clear that it was possible to assess character traits from eye movements,” said the expert.

The next step is to improve the software’s performance, and the team wants to analyse body language too. While incorporating the software into hardware is not their own ambition, they say there is great scope for applying it in the robotics field.

The software could even assist those with autism

Eye-tracking glasses equipped with this software could assist those struggling with autism to better understand their peers’ behaviour and emotions.

“Like anything, what we’ve made can be used for both good and bad,” said Bulling, but he hopes the software will lead to better and more authentic interaction between man and machine.

Just as an example, the eye-tracking software could eventually be installed in cars, which would then be able to recognise whether drivers are more willing to take risks or not.

If the software is sufficiently developed, it could even be used to assist people with autism: With eye-tracking glasses, those struggling to gauge their peers’ reactions and feelings could better understand others’ behaviour.

However, Bulling said that whether the software is used positively or negatively is not his responsibility: “That’s ultimately for society and the politicians to decide.”

Source
Author: Katharina Maß

Eye eye! DeepMind teams up with doctors to ogle eyeballs for illness

AI can help speed up diagnosis and seldom gets it wrong

AI can help ophthalmologists diagnose more than 50 common eye diseases from retinal scans, according to a paper published in Nature Medicine on Monday.


Researchers from DeepMind, University College London (UCL), and Moorfields Eye Hospital in London have developed a system based on two neural networks that analyses 3D optical coherence tomography scans.

First, a convolutional neural network processes the scanned image and converts it into a tissue map, detailing its anatomy and features. This is then fed into a second convolutional neural network that analyses the eye tissues in detail to diagnose diseases and recommend treatment plans, such as whether the patient needs to book a further appointment with an ophthalmologist.
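The two-stage structure can be sketched as a simple function composition. This is a toy illustration only: the function names and the trivial decision rule are assumptions standing in for the actual convolutional networks, not DeepMind’s implementation:

```python
# Minimal sketch of the two-stage pipeline: scan -> tissue map -> referral.
# Placeholder logic stands in for the two real convolutional networks.

def segment_to_tissue_map(oct_scan):
    """Stage 1: maps the device-specific scan into a device-independent
    tissue map. Here, a threshold labels each value with a tissue class."""
    return [["retina" if v > 0.5 else "background" for v in row]
            for row in oct_scan]

def diagnose(tissue_map):
    """Stage 2: maps the tissue map to a referral decision. The real
    network scores ~50 diseases; a trivial rule stands in here."""
    total = len(tissue_map) * len(tissue_map[0])
    retina_fraction = sum(row.count("retina") for row in tissue_map) / total
    return "urgent referral" if retina_fraction < 0.3 else "routine"

scan = [[0.9, 0.8], [0.1, 0.2]]
decision = diagnose(segment_to_tissue_map(scan))
```

Splitting the problem this way is what lets the second stage ignore device-specific imaging variation, which is the reason the system needs far fewer training scans than an end-to-end network.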

It can, apparently, recommend the right referral procedure with 94 per cent accuracy for over 50 diseases such as age-related macular degeneration, a common condition amongst the elderly.

Unlike other systems, this one doesn’t have to be trained on millions of different scans, according to the paper. “Automated diagnosis of a medical image, even for a single disease, faces two main challenges: technical variations in the imaging process (different devices, noise, ageing of the components and so on), and patient-to-patient variability in pathological manifestations of disease.”

“Existing deep learning approaches tried to deal with all combinations of these variations using a single end-to-end black-box network, thus typically requiring millions of labeled scans.”

By first mapping the eye scans into their component tissues, the system reduces the amount of variation it has to handle, making the scans easier to analyse. Other systems need exposure to far more images before they can generalise; this one required only 14,884 scans from 7,621 patients for training.


Machine learning tools will help speed up the process of diagnoses and treatment for patients, said Pearse Keane, co-author of the paper and an ophthalmologist at Moorfields Eye Hospital and a clinician scientist at UCL.

“The number of eye scans we’re performing is growing at a pace much faster than human experts are able to interpret them. The AI technology we’re developing is designed to prioritise patients who need to be seen and treated urgently by a doctor or eye care professional. If we can diagnose and treat eye conditions early, it gives us the best chance of saving people’s sight. With further research it could lead to greater consistency and quality of care for patients with eye problems in the future.”


Here at Dollar Destruction, we endeavour to bring to you the latest, most important news from around the globe. We scan the web looking for the most valuable content and dish it right up for you! The content of this article was provided by the source referenced. Dollar Destruction does not endorse and is not responsible for or liable for any content, accuracy, quality, advertising, products or other materials on this page. As always, we encourage you to perform your own research!

Source
Author: Katyanna Quach

Don’t forget to join our facebook page for Crypto, Business & Technology news delivered to you daily.

Your vegetables are going to be picked by robots sooner than you think

In the very near future, robots are going to be picking the vegetables that appear on grocery store shelves across America.

The automation revolution that’s arrived on the factory floor will make its way to the ag industry in the U.S., and its first stop will likely be the indoor farms now dotting the country.


Leading the charge in this robot revolution will be companies like Root AI, a young startup which has just raised $2.3 million to bring its first line of robotic harvesting and farm optimization technologies to market.

Root AI is focused on the 2.3 million square feet of indoor farms that currently exist in the world and is hoping to expand as the number of farms cultivating crops indoors increases. Some estimates from analysis firms like Agrilyst put the planned expansions in indoor farming at around 22 million square feet (much of that in the U.S.).

While that only amounts to roughly 505 acres of land — a fraction of the 900 million acres of farmland that’s currently cultivated in the U.S. — those indoor farms offer huge yield advantages over traditional farms with a much lower footprint in terms of resources used. The average yield per acre in indoor farms for vine crops like tomatoes and leafy greens is over ten times higher than outdoor farms.

Root AI’s executive team thinks their company can bring those yields even higher.

Founded by two rising stars of the robotics industry, the 36-year-old Josh Lessing and 28-year-old Ryan Knopf, Root is an extension of work the two men had done as early employees at Soft Robotics, the company pioneering new technologies for robotic handling.

Spun out of research conducted by Harvard professor George Whitesides, the team at Soft Robotics was primarily comprised of technologists who had spent years developing robots despite having no formal training in robotics. Knopf, a lifelong roboticist who studied at the University of Pennsylvania, was one of the few employees with a traditional robotics background.


“We were the very first two people at Soft developing the core technology there,” says Lessing. “The technology is being used heavily in the food industry. What you would buy a soft gripper for is… making a delicate food gripper very easy to deploy that would help you maintain food quality with a mechanical design that was extremely easy to manage. Like inflatable fingers that could grab things.”

Root AI co-founders Josh Lessing and Ryan Knopf

It was radically different from the ways in which other robotics companies were approaching the very tricky problem of replicating the dexterity of the human hand. “From the perspective of conventional robotics, we were doing everything wrong and we would never be able to do what a conventional robot was capable of. We ended up creating adaptive gripping with these new constructs,” Lessing said.

While Soft Robotics continues to do revolutionary work, both Knopf and Lessing saw an opportunity to apply their knowledge to an area where it was sorely needed — farming. “Ag is facing a lot of complicated challenges and at the same time we have a need for much much more food,” Lessing said. “And a lot of the big challenges in ag these days are out in the field, not in the packaging and processing facilities. So Ryan and I started building this new thesis around how we could make artificial intelligence helpful to growers.”

The first product from Root AI is a mobile robot that operates in indoor farming facilities. It picks tomatoes, assesses crop health, and performs simple operations like pruning vines and observing and controlling ripening profiles, so that it can cultivate crops (initially tomatoes) continuously and more effectively than people.

Root AI’s robots have multiple cameras (one on the arm of the robot itself, the “tool’s” view, and one sitting to the side of the robot with a fixed reference frame) to collect both color images and 3D depth information. The company has also developed a customized convolutional neural network to detect objects of interest and label them with bounding boxes. Beyond the location of the fruit, Root AI uses other, proprietary, vision processing techniques to measure properties of fruit (like ripeness, size, and quality grading). All of this is done on the robot, without relying on remote access to a data-center. And it’s all done in real time.
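The detect-then-grade loop described above might be sketched as follows. The detection format, the 0-to-1 ripeness scale, and the picking threshold are all hypothetical illustrations, not Root AI’s proprietary API:

```python
# Hypothetical sketch of "detect fruit, grade it, decide what to pick".
# Each detection pairs a bounding box with a ripeness score in [0, 1];
# both the format and the threshold are assumptions for illustration.

def pickable(detections, min_ripeness=0.8):
    """Filter a frame's detected fruit down to the ones the
    arm should harvest on this pass."""
    return [d for d in detections if d["ripeness"] >= min_ripeness]

frame = [
    {"box": (10, 20, 50, 60), "ripeness": 0.93},   # ripe -> pick
    {"box": (80, 15, 120, 55), "ripeness": 0.41},  # green -> leave on vine
]
targets = pickable(frame)
```

In the real system the detections come from the on-board convolutional network and the ripeness grade from Root AI’s vision-processing techniques; the point of the sketch is that everything downstream of the cameras runs locally, per frame, in real time.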

Tools like these robots are increasingly helpful, as the founders of Root note, because there’s an increasing labor shortage for both indoor and outdoor farming in the U.S.

Meanwhile, the mounting pressures on the farm industry increasingly make robotically assisted indoor farming a more viable option for production. Continuing population growth and the reduction of arable land resulting from climate change mean that indoor farms, which can produce as much as twenty times as much fruit and vegetables per square foot while using up to 90% less water, become extremely attractive.

Suppliers like Howling Farms, Mucci Farms, Del Fresco Produce and Naturefresh are already producing a number of fruits and vegetables for consumers, said Lessing. “They’ve really fine tuned agriculture production in ways that are meaningful to broader society. They are much more sustainable and they allow you to collocate farms with urban areas [and] they have a much more simplified logistics network.”

That ability to pare down complexity and cost in a logistics supply chain is a boon to retailers like Walmart and Whole Foods that are competing to provide fresher, longer-lasting produce to consumers, Lessing said. Investors apparently agreed: Root AI was able to enlist firms like First Round Capital, Accomplice, Schematic Ventures, Liquid2 Ventures and Half Court Ventures to back its $2.3 million round.

“There are many, many roles at the farm and we’re looking to supplement in all areas,” said Lessing. “Right now we’re doing a lot of technology experiments with a couple of different growers: assessment of ripeness and the grippers’ ability to grab the tomatoes. Next year we’re going to be doing the pilots.”

And as global warming intensifies pressures on food production, Lessing sees demand for his technologies growing.

“On a personal level I have concerns about how much food we’re going to have and where we can make it,” Lessing said. “Indoor farming is focused on making food anywhere. If you control your environment you have the ability to make food… Satisfying people’s basic needs is one of the most impactful things I can do with my life.”



Source
Author: Jonathan Shieber


Facebook poaches top Google engineer

Facebook has hired one of Google’s lead chip developers, Shahriar Rabii, to help the social network in its ongoing effort to design its own silicon, according to a report from Bloomberg. Facebook hopped on the chip-developing bandwagon earlier this year, when it started to build a team that could design custom chips to power server and consumer hardware. Rabii’s new role at Facebook will be as a vice president and head of silicon, according to an updated LinkedIn bio.



It’s a move that’s on trend with other tech giants, many of which are bringing chip design in-house rather than relying on big-name suppliers like Intel and Qualcomm. Apple has been creating its own custom processors for iOS devices for nearly a decade, and it has designed custom single-purpose chips for artificial intelligence and other tasks in recent years. The iPhone maker is also reportedly planning on using its own chips to replace the Intel processors in its Mac computers by 2020. Earlier this year, Amazon reportedly embarked on a new initiative to design its own chips, specifically to help power AI features for its Echo line of smart speakers.

Google produces its own custom Visual Core chips for the Pixel smartphones — and Rabii, Facebook’s recent hire, had previously led the team that developed them. In his new role, Rabii isn’t likely to be developing chips for Facebook-branded smartphones, but the company is working on several types of hardware that could use a custom processor.

Earlier this year, Facebook-owned Oculus VR launched the standalone Oculus Go virtual reality headset that currently relies on a Qualcomm-branded chip. Future models may use custom Facebook chips instead. The company is also reportedly developing its own series of Echo Show-like smart speakers with AI features, and a custom chip may give Facebook a competitive advantage in the home.

The custom chips could also be used to better train the AI algorithms that Facebook has patrolling its site for hate speech, fake accounts, and potentially dangerous content. Right now, the company uses modified third-party GPUs from companies like Nvidia. Designing its own AI training servers with proprietary chips, as Google does with its Tensor Processing Units, could help with the very tricky problem of using AI instead of human eyes to police its ever-growing platform.



Source
Author: Shoshana Wodinsky
Image Credit: Alex Castro / The Verge

Don’t forget to join our Telegram channel for Crypto, Business & Technology news delivered to you daily.