There is no doubt AI will transform society, and there is a big need to safeguard against improper use.
TOI Contributor | March 16, 2019, 09:04 IST
A redeeming feature of the digital age is the shrinking time lag between when a new technology first appears in developed countries and its adoption across the rest of the world. Sometimes developing countries leapfrog a generation entirely – as India did with telecom, moving straight from sparse landline coverage to ubiquitous mobile phones; or more recently, when mobile-based payments won wider acceptance in a few years than credit cards did over decades.
Much like this, artificial intelligence, machine learning (AI, ML) and robotics are silently but steadily automating work tasks. Some of it is already in everyday use. Email comes with spam filters and smart replies; maps and ride-sharing apps use AI for route planning and pricing; online retailers use AI to understand your preferences and buying habits so they can personalise your shopping experience. Music streaming sites serve up AI-curated playlists, and the 24-hour customer helpdesk you chat with online probably doesn’t have a human at the other end.
As voice and facial recognition continue to evolve, machine learning algorithms are becoming more capable than ever. Deep neural networks now give AI the ability to learn – moving from executing tasks to solving problems without further instruction. Venture capital funding of AI companies globally soared 72% to a record $9.3 billion in 2018.
But as AI becomes increasingly embedded in our society, it will change how we work and live. There is understandable fear that AI will cause disruption and take away jobs, and policy makers will need to address this given existing concerns about rising unemployment. IDC predicts that by 2024, half of structured and repeatable workplace tasks will be automated, and 20% of knowledge workers will have AI-infused software or a digitally connected technology as a coworker. According to a McKinsey report, job profiles characterised by repetitive activities could see the largest decline as a share of total employment, falling from about 40% to around 30% by 2030.
Policy makers will have to show bold leadership, and companies will have to take on the mammoth task of skilling and reskilling people to work with AI. Individuals will need to adjust to a world in which job turnover is more frequent as they transition to new types of employment, and in which they will likely have to continually refresh and update their skills to match a dynamically changing job market.
As one of the world’s fastest growing major economies, India must ready itself for the AI onslaught. Last June NITI Aayog released a paper arguing that India can position itself as a leader in AI, coining the term #AIforAll. It focuses India’s AI efforts on five sectors: healthcare, agriculture, education, smart cities and infrastructure, and smart mobility and transportation. The paper also underlines the need for privacy and security, including evolving norms for the regulation and anonymisation of data.
According to a LinkedIn report, India ranks third after the US and China with the highest penetration of AI skills among its workforce. The number of LinkedIn members adding AI skills to their profiles increased 190% between 2015 and 2017 – the fastest growing skill set, by far.
Besides the productivity and efficiency advantages AI brings, there is also the moral dimension of machines smart enough to make decisions. Machine learning algorithms are building personality profiles on every human being. They can learn your behaviour, and before you know it, they know you better than you know yourself. Nobel Laureate Joseph Stiglitz says: “Artificial intelligence and robotisation have the potential to increase the productivity that could make everybody better off, but only if they are well managed.” Stiglitz draws a distinction between AI that replaces workers and AI that helps people do their jobs better.
Amongst the better uses of machine learning is the interpretation of medical data: for some kinds of cancers and other disorders, computers are already better than humans at spotting dangerous patterns in a scan. Researchers at New York’s Icahn School of Medicine have used AI to scour the electronic health records of 7,00,000 patients to predict risk factors for 78 diseases – so successfully that doctors now turn to the system to help diagnose illnesses.
On the other hand, China’s use of machine learning for political repression has gone well beyond surveillance cameras. A recent report from a government think tank praised the software’s power to “predict the development trajectory for internet incidents, pre-emptively intervene in and guide public sentiment to avoid mass online public opinion outbreaks, and improve social governance capabilities”.
Last year saw the conceptual boundaries of AI pushed further. Google’s DeepMind built a machine that can teach itself the rules of games and, after two or three days of concentrated learning, beat every human and every other computer player there has ever been – a machine that learns so fast, it can conquer all. Elon Musk has warned of the need for regulatory oversight of AI at a national or international level, just to make sure humans don’t do something very foolish. As he put it, “With AI we’re summoning the devil.”
There is no doubt, then, that AI will transform society, and there is a pressing need to safeguard against its misuse. Creating ethical AI will take more than good intent. It will require far greater collaboration between industry, governments and technology experts. By working with experts, regulators can put in place standards that protect us while letting AI augment humans safely – so that we continue to reap the potential of these smart machines rather than become subservient to them.