Not a day goes by when one doesn’t come across the term ‘ethical watchdog’. With enterprises mainstreaming AI applications and consumer AI taking off, concerns around data misuse and algorithmic bias abound. The recent GDPR guidelines, which put the focus on the need for ethics and governance in AI, are a laudable attempt to bring together legal and technical communities to help form better policy in the future.
According to an IDC report, by 2019, 40% of digital transformation initiatives will use AI services, and by 2021, 75% of commercial enterprise apps will use AI. Developers will soon become the drivers of growth and the critical population to watch, as IT organizations hire AI engineers and data scientists to support the large number of DX initiatives that depend on AI.
Given the rise of an algorithmic economy, ethics and data protection will come into sharp focus, noted Giovanni Buttarelli, European Data Protection Supervisor. There will be a proliferation of consumer AI applications powered by sophisticated algorithms, which means that alongside AI itself, governance is another area gaining momentum. Making technology work in the interests of human beings will become a critical component.
In view of these trends, AI will open new avenues and career paths for professionals. While AI will remain a tech-dominated field, there will be a growing need for academicians and researchers from universities and think tanks, drawn from fields like economics, sociology, and philosophy and with a background in emerging technology policy, to work with interdisciplinary teams of experts, steer a company’s AI policy, and improve the digital-human interface.
Large enterprises like Microsoft and Google-owned DeepMind which have a strong position in the AI ecosystem are already putting the building blocks in place by setting up an Ethics & Impact team, focused on understanding the social effects and ethical challenges surrounding the emerging technology of artificial intelligence.
Microsoft sets up FATE: Case in point – Microsoft’s newly minted research group FATE, which works on collaborative research projects that address the need for transparency, accountability, and fairness in AI and ML systems. The group publishes in a wide array of disciplines, including machine learning, information retrieval, systems, sociology, political science, and science and technology studies.
The group addresses important ethical questions, such as: how best to use AI to assist users and offer enhanced insights, while avoiding exposing them to different types of discrimination in health, housing, law enforcement, and employment? In the same vein, how can AI applications balance the need for efficiency and exploration with fairness and sensitivity to users? And as we move toward relying on intelligent agents in our everyday lives, how do we ensure that individuals and communities can trust these systems?
DeepMind sets up Ethics & Society team: In October last year, London-headquartered DeepMind set up an Ethics & Society research unit to build artificial intelligence applications that work for the benefit of all. To that effect, DeepMind has hired scientists and practitioners from diverse fields, spanning academia and charitable organizations. The company has brought in American economist and Columbia professor Jeffrey Sachs, Oxford AI professor Nick Bostrom, and climate change campaigner Christiana Figueres to advise the DeepMind team and support open research and investigation into the wider impacts of its work.
Partnership on AI: In 2016, big tech companies like Amazon, Facebook, Google, Microsoft, Apple, and IBM joined hands to build a first-of-its-kind industry-led consortium that includes well-known academicians and nonprofit researchers to help build ethical technologies and ensure the trustworthiness of AI. From developing best practices to advancing public understanding, the consortium regularly engages experts from various disciplines such as psychology, philosophy, economics, finance, sociology, public policy, and law to provide guidance on AI-related issues and their impact on society.
The Berkman Klein Center and the MIT Media Lab: These two institutes are conducting evidence-based research with an aim to provide guidance to key decision-makers from the public and private sectors and deliver high-impact pilot projects that bolster the use of AI for the public good. Through their research efforts, the centres will also build up an institutional knowledge base on the ethics and governance of AI and strengthen the interface with industry and policy-makers.
The recent upheaval caused by the GDPR regulations, which come into effect this year, has forced enterprises big and small to review and rewrite their data governance guidelines and overhaul their systems. If there is ever a time to jumpstart your career in this field, it is now. Research groups are expanding their stakeholder communities and are always on the lookout for candidates for key positions, such as Director of Research, Director of Partnerships, and Program Associate. You can access the details here.
DeepMind recently announced it is hiring a Policy & Ethics Researcher, and some of the requirements are:
You can access the job description here.
The three core areas that AI researchers on the Ethics team work on are AI policy, AI governance, and AI strategy. This work, according to Oxford’s Future of Humanity Institute, touches on a range of topics and areas of expertise, such as international relations, international institutions and global cooperation, international law, and international politics.
According to Miles Brundage, well-known AI policy researcher at the University of Oxford’s Future of Humanity Institute, there are four main roles in this area:
Candidates work on a range of topics: improving public opinion about AI, bridging short-term and long-term AI policy, and case studies comparing AI with related technologies.
Most of these job openings look for graduates who have a background in international relations, economics, psychology, or law, have a deep interest in emergent technology, and, most importantly, have a good grasp of complex technical and regulatory issues.
Those who wish to work in AI advocacy could find jobs in big tech companies and government think tanks, where the role would entail building broader awareness about AI through conferences and literature and advocating for its growth.
Some of the job titles are:
Usually, professionals from the government sector, non-profits, and academia can land a top slot in enterprises looking for external advisors from diverse fields to contribute ideas and ensure fairness in commercial AI applications. Professionals who have a strong technical background in AI are well-suited to develop and oversee AI ethics committees and evaluate the recommendations posed by the team.
From consulting firms such as Deloitte, McKinsey, and Nielsen to government think tanks (Niti Aayog, AI Task Force), academic institutions (Wadhwani Institute for AI — India’s first AI research centre), enterprises, and India’s robotics startups, there is a growing need for AI strategists and policymakers who can straddle the fields of law, governance, ethics, and policy-making. Besides these areas, external advisors can also pursue opportunities in for-profit organizations and government bodies that keep an eye on competing nations and the changing landscape of AI.
Tech’s biggest companies are placing huge bets on artificial intelligence, banking on things ranging from face-scanning smartphones and conversational coffee-table gadgets to computerized health care and autonomous vehicles.