The rapidly advancing area of artificial intelligence will require a new field of law and new regulations to govern the growing pool of businesses involved, according to Microsoft Corp., which has participated in AI research for 25 years.

Companies that make and sell AI software will need to be held responsible for potential harm caused by "unreasonable practices," such as a self-driving car program set up in an unsafe manner that causes injury or death, Microsoft said. And as AI and automation swell the ranks of workers in gig-economy and on-demand jobs, Microsoft said technology companies need to take responsibility and advocate for protections and benefits for those workers, rather than passing the buck by claiming to be "just the technology platform" enabling all this change.

Microsoft broaches these ideas in a 149-page book titled "The Future Computed," which will also be the subject of a panel at the World Economic Forum in Davos, Switzerland, next week. As Redmond, Washington-based Microsoft seeks to be a leader in AI and in automating work tasks, it is also trying to get out in front of the challenges these promising new technologies are expected to bring, such as job losses and harm to everyday citizens from malfunctioning or biased algorithms.
