British graduate visas
Britain’s government slaloms between celebrating the international students who prop up its universities and trying to stem their supply. Home Office data suggest that the number of international students admitted to universities in 2023 was 70pc higher than five years earlier. These swelling student numbers have pushed up Britain’s much-watched net migration figures. Earlier this year the government ordered a review of the “graduate route”, a policy crucial in attracting foreign bookworms: it permits most foreign students to live and work in Britain for two years after they finish their studies. Critics warn that some foreigners are enrolling on courses primarily to qualify for the post-study work rights, rather than for the credential itself. The Conservative Party, eager to look tough on immigration, seemed to hope that the government’s Migration Advisory Committee (MAC) would recommend junking the programme. Instead, the MAC’s report, published last week, gave the programme its full support.
(Adapted from “Advisers To British Government: Don’t Mess With Graduate Visas,” by The Economist, published on May 14, 2024)
AI and our sense of responsibility
As artificial intelligence plays an ever-larger role in automated systems and decision-making processes, the question of how it affects humans’ sense of their own agency is becoming less theoretical, and more urgent. It’s no surprise that humans often defer to automated decision recommendations, with exhortations to “trust the AI!” spurring user adoption in corporate settings. However, there’s growing evidence that AI diminishes users’ sense of responsibility for the consequences of those decisions. AI practice is concerned with legal responsibility, wherein an individual or corporate entity is typically held responsible via civil law, and with moral responsibility, wherein individuals are held accountable via punishment, as in criminal law. By contrast, the sense of responsibility entails critical thinking and predictive reflection on the purpose and possible consequences of one’s actions, not only for oneself but for others. It’s this sense of responsibility that AI and automated systems can alter.
(Adapted from “How AI Skews Our Sense of Responsibility,” by Ryad Titah, published on May 13, 2024, by MIT Sloan Management Review)
Owning words for authority
The practice of invoking distant interests by making statements like “The CEO needs this by the close of play” or “The board needs answers immediately” is commonplace and natural. But when managers rely on this approach to excess, it is usually an indicator of one of two issues in the organisation: Either managers possess too little autonomy and are compelled to speak the words of others (typically, the organisation’s leaders), or, as is the case in most organisations, the habit of ventriloquism has become so ingrained that managers act as others’ mouthpieces without giving the practice much thought. Either way, by routinely saying “The CEO needs …,” for instance, a manager can create the perception, both in their own mind and among colleagues, that they lack authority. Over time, speaking for others in this way engenders a managerial culture where responsibility is forever being passed on to someone else, with no one willing to take ownership of decisions.
(Adapted from “Own Your Words to Gain Authority,” by David Hollis and Alex Wright, published on January 24, 2024, by MIT Sloan Management Review)
Teams of LLMs
Ask ChatGPT to recommend the must-do activities on a holiday to Berlin, and OpenAI’s chatbot will do a great job of proposing restaurants, bars, museums and parks that it reckons you might like. But ask it to plan your trip — complete with details of which order to see the sights in, which train tickets to buy and where to eat, all within a set budget — and it will disappoint. There is a way, however, to make large language models (LLMs) perform such complex jobs: make them work together. Teams of LLMs — known as multi-agent systems — can assign each other tasks, build on each other’s work or deliberate over a problem in order to find a solution that each one, on its own, would have been unable to reach.
(Adapted from “Today’s AI Models Are Impressive. Teams Of Them Will Be Formidable,” by The Economist, published on May 13, 2024)
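To make the planner-worker pattern the excerpt describes more concrete, here is a minimal Python sketch. It is illustrative only: the `ask_llm` helper, the planner/worker/reviewer roles and the trip-planning prompts are assumptions of this sketch, not details from the article, and `ask_llm` would need to be wired to a real chat-completion API.

```python
# A minimal sketch of the multi-agent pattern described above: a "planner"
# model breaks a trip-planning request into subtasks, "worker" models handle
# each subtask while seeing earlier results, and a "reviewer" model checks the
# combined plan against the budget. `ask_llm` is a hypothetical stand-in for
# any chat-completion call.

def ask_llm(role: str, prompt: str) -> str:
    """Placeholder for a real LLM call (e.g. a chat-completion API)."""
    raise NotImplementedError("Wire this to your LLM provider of choice.")

def plan_trip(request: str, budget_eur: int) -> str:
    # 1. The planner agent decomposes the job into ordered subtasks.
    subtasks = ask_llm(
        "planner",
        f"Split this request into numbered subtasks, one per line: {request}",
    ).splitlines()

    # 2. Worker agents each tackle one subtask, seeing prior results so they
    #    can build on each other's work.
    results: list[str] = []
    for task in subtasks:
        context = "\n".join(results)
        results.append(
            ask_llm("worker", f"Done so far:\n{context}\n\nNow do: {task}")
        )

    # 3. A reviewer agent deliberates over the combined draft and sends it
    #    back for revision if it breaks the budget.
    draft = "\n".join(results)
    verdict = ask_llm("reviewer", f"Check this plan stays under €{budget_eur}:\n{draft}")
    if "over budget" in verdict.lower():
        draft = ask_llm("worker", f"Revise this plan to fit €{budget_eur}:\n{draft}")
    return draft
```

The division of labour, rather than any single model's cleverness, is what lets such a system handle the ordering, ticketing and budgeting constraints that trip up a lone chatbot.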
Published in Dawn, The Business and Finance Weekly, May 20th, 2024