
AI Weekly: The implications of self-driving tractors and coming AI regulations

January 7, 2022 4:30 PM

Hear from CIOs, CTOs, and other C-level and senior execs on data and AI strategies at the Future of Work Summit this January 12, 2022.


It’s 2022, and developments in the AI industry are off to a slow — but nonetheless eventful — start. While the spread of the Omicron variant put a damper on in-person conferences, enterprises aren’t letting the pandemic disrupt the course of technological progress.

John Deere unveiled a tractor that uses AI to navigate a field by itself and plow the soil without instructions. As Wired’s Will Knight points out, it, and self-driving tractors like it, could help address the growing labor shortage in agriculture: employment of agricultural workers is expected to increase just 1% from 2019 to 2029. But such tractors also raise questions about vendor lock-in and the role of humans farming alongside robots.

Farmers could become more dependent on Deere’s decision-making systems. Deere could also use data from autonomous tractors to build features it then gates behind a subscription, further eroding farmers’ autonomy.

Driverless tractors are an example of automation’s growing role across industries. Numerous reports have shown that while AI may lead to greater productivity, profitability, creativity, and innovation, those gains won’t always be equally distributed. AI is likely to complement skilled workers in areas such as health care, where labor is in short supply, but it can also be used to deskill or replace jobs in industries that rely on routine tasks.

A report by American University suggests that legislators address these gaps by restructuring school curricula to reflect changing skill demands. Regulation has a role to play, too, in preventing companies from monopolizing AI in certain industries to pursue consumer-hostile practices. The right solution, or better yet a combination of solutions, remains elusive. The mass-market introduction of self-driving tractors is another example of technology running ahead of policymaking.

Regulating algorithms

Speaking of regulators, China this week further detailed its plans to curtail the algorithms used in apps to recommend what consumers buy, read, and watch online. According to a report in South China Morning Post, companies that use these types of “recommender” algorithms will be required to “promote positive energy” by allowing users to decline suggestions offered by their services.

The move will have a significant impact on corporate giants such as Tencent, Alibaba, and TikTok owner ByteDance, and is part of a broader campaign to rein in the Chinese tech industry. It also reflects a wider effort by governments to curb the abuse of AI technology in the pursuit of profit at all costs.

Beyond the European Union’s (EU) comprehensive AI Act, a government think tank in India has proposed an AI oversight board to establish a framework for “enforcing responsible AI principles.” In the U.K., the government launched a national standard for algorithmic transparency, which recommends that public sector bodies in the country explain how they’re using AI to make decisions. And in the U.S., the White House released draft guidance that includes principles for U.S. agencies when deciding whether — and how — to regulate AI.

A recent Deloitte report predicts that 2022 will see increased discussion about regulating AI “more systematically,” although the coauthors concede that enacting proposals into regulation will likely happen in 2023 (or beyond). Some jurisdictions may even try to ban — and, indeed, have banned — whole subfields of AI, like facial recognition in public spaces and social scoring systems, the report notes.

Why now? AI is becoming ubiquitous, which is attracting greater regulatory scrutiny. The technology’s implications for fairness, bias, discrimination, diversity, and privacy are also coming into clearer view, as is the geopolitical leverage that AI regulations could give countries that implement them early.

Regulating AI will not be an easy task. Auditing AI systems is difficult, and it is not always possible to ensure that the data used to train them is accurate and complete (as the EU’s AI Act requires). Different countries may also pass contradictory regulations, making it hard for companies to comply with all of them. Deloitte suggests that the best-case scenario is the emergence of a “gold standard,” as the EU’s General Data Protection Regulation became for privacy.

More regulations governing AI will be passed in the near future. Though it’s not clear exactly what those regulations will look like, it is likely that they will materially a

