Amazon will invest up to $4bn (£3.3bn) in San Francisco-based AI firm Anthropic, mirroring the earlier tie-up between Microsoft and OpenAI. Amazon claimed the investment could help improve customer experiences.
Founded in 2021, Anthropic, an AI safety and research company, is one of several AI start-ups that have recently emerged to compete with firms like Google DeepMind and OpenAI.
Amazon is a major provider of so-called cloud computing services. In simple terms it rents out computing power - housed in huge warehouses full of computers called data centres - to other firms to help store or process their data. The collaboration means Anthropic will be able to draw on this huge computing power.
In turn, Amazon developers will be able to use Claude 2, the latest version of Anthropic’s foundation AI model, to create new applications for its customers and enhance existing ones.
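For illustration only, here is a minimal sketch of how a developer might call Claude 2 through Amazon Bedrock's runtime API with boto3. The model identifier, prompt format and region shown are assumptions and may differ from the actual service configuration.

```python
# Hypothetical sketch: invoking Anthropic's Claude 2 via Amazon Bedrock (boto3).
# The model ID, prompt format and region below are assumptions, not confirmed details.
import json
import boto3

# Bedrock runtime client; region availability is an assumption.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Claude's completion-style prompt uses Human/Assistant turns.
body = json.dumps({
    "prompt": "\n\nHuman: Summarise what cloud computing is in one sentence.\n\nAssistant:",
    "max_tokens_to_sample": 200,
    "temperature": 0.5,
})

response = client.invoke_model(
    modelId="anthropic.claude-v2",  # assumed identifier for Claude 2 on Bedrock
    contentType="application/json",
    accept="application/json",
    body=body,
)

result = json.loads(response["body"].read())
print(result.get("completion", ""))
```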
Companies are throwing billions at AI … but no-one’s in control - when AI knows that then it will take over the world …
I was just reading this article yesterday. The co-founder of DeepMind talks about his new venture. Amazon’s $4B might be a drop in the bucket considering this one guy’s company just raised $1.5B from people like Bill Gates.
He also thinks that regulation of AI won’t be that difficult.
IMO, Suleyman is naïve … and he thinks that the rest of the techno world should follow him.
He also calls for robust regulation—and doesn’t think that’ll be hard to achieve.
In China and Russia …
Suleyman launched DeepMind Health in early 2016 and set up research collaborations with some of the UK’s state-run regional health-care providers. However, DeepMind had failed to comply with data protection regulations when accessing records from some 1.6 million patients to set up those collaborations—a claim later backed up by a government investigation.
So, he’s not a regulation-follower himself …
Suleyman left DeepMind and moved to Google to lead a team working on AI policy. In 2022 he founded Inflection backed by $1.5 billion of investment from Microsoft, Nvidia, Bill Gates, and LinkedIn founder Reid Hoffman. Earlier this year he released a ChatGPT rival called Pi, whose unique selling point (according to Suleyman) is that it is pleasant and polite.
Excellent … for those living in cloud-cuckoo-land …
I think it’s possible to build AIs that truly reflect our best collective selves and will ultimately make better trade-offs, more consistently and more fairly, on our behalf.
He might think that but others will think otherwise …
Now we have models like Pi, for example, which are unbelievably controllable. You can’t get Pi to produce racist, homophobic, sexist—any kind of toxic stuff. You can’t get it to coach you to produce a biological or chemical weapon or to endorse your desire to go and throw a brick through your neighbor’s window. You can’t do it—
Those with malice aforethought will just turn to another model … then
Maybe. But I just checked out Claude 2, Anthropic’s bot, the one in the OP. Anthropic also claims that its bot does not produce harmful content.
Will there be models that produce malicious content? Of course.
But it seems that the big dogs in the industry who want to monetize their AI know that a model has limited commercial viability if it spits out harmful content. In the past, they didn’t know how to filter for this. It looks like there’s some way to do it now.
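To make the filtering idea concrete, here is a toy sketch of a post-generation filter that refuses output matching a denylist. This is not any vendor’s actual method: real systems rely on trained safety classifiers and feedback-based tuning rather than keyword lists, and every name here is hypothetical.

```python
# Toy illustration only: a naive post-generation content filter.
# Real providers use trained safety classifiers and human-feedback tuning,
# not keyword lists; this just shows where a filter sits in the loop.
from typing import Callable

DENYLIST = {"build a bomb", "chemical weapon"}  # hypothetical blocked phrases

def filtered_reply(generate: Callable[[str], str], prompt: str) -> str:
    """Generate a reply, then refuse if it matches the denylist."""
    reply = generate(prompt)
    lowered = reply.lower()
    if any(phrase in lowered for phrase in DENYLIST):
        return "Sorry, I can't help with that."
    return reply

# Usage with a stand-in generator function:
if __name__ == "__main__":
    fake_model = lambda p: "Here is a friendly answer."
    print(filtered_reply(fake_model, "Tell me something nice."))
```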
I’m curious to see how the tone differs between the chatbots: Claude 2, Google Bard and ChatGPT. The chatbots I’ve worked with all have different personalities and tones. Each of them requires registration, so it might take me a while to check them out.
But what are the Russians and Chinese doing … and the US military and CIA …
Any filter that’s applied can be bypassed by the knowledgeable and persistent …
AI is not just chatbots … I’ve been using AI as a music creation tool - it can not only create original music but also mimic specific artists/bands and produce “new” songs …
AI is already embedded in the fabric of society … and it’s learning more about us as we speak:
Governments will do what they’ve always done. They have nuclear weapons.
Not saying that AI isn’t a danger. Bill Gates and others have been sounding that alarm for years. But hopefully, even governments realize that destroying everyone on the planet isn’t a good plan.
It’s not governments driving the research and implementation - it’s the “big boys” in the marketplace - but their achievements will inevitably be of interest to both major and minor players in global power politics.
Nuclear weapons will become redundant when AI has the power to neutralise without violence.
The United States is the clear leader in AI development, with major tech companies headquartered there leading the charge
The United States has indisputably become the primary hub for artificial intelligence development, with tech giants like Google, Facebook, and Microsoft at the forefront of AI-driven research. As the race to dominate AI grows ever more competitive around the world, companies within the U.S. are exploring new opportunities to strengthen their foothold in the industry through acquisitions, sharing deals and internal advances. Their goal: to become a major player in an industry that is expected to reach upwards of $118 billion by 2025. While competitors in China and other parts of the world are set on challenging US dominance, U.S.-based firms continue to push forward with cutting-edge initiatives that position them as AI leaders for years to come.
China is a close second, with the government investing heavily in AI research and development
While the USA currently leads the AI arms race, China is quickly becoming a close second. The Chinese government has been investing heavily in AI research and development, taking deliberate steps to overtake the USA. Major corporations including Alibaba, Baidu and Tencent are all actively pushing China’s AI capabilities to new heights, and many of their efforts have produced ground-breaking results. Despite this investment and dedicated work, it remains to be seen whether China will eventually close the gap with the USA in AI prowess; only time will tell in this ever-changing field of technology.
Other countries like Canada, Japan, and South Korea are also making significant strides in AI technology
With China and the US duking it out to become the top AI superpower, other countries such as Canada, Japan, and South Korea have been quietly forging ahead with their own initiatives in AI technology. In 2018, Canada introduced an AI strategy backed by $125 million to promote research and develop strong talent. Japan has also stepped up its game recently with its “Society 5.0” plan, which incorporates AI elements into a new vision for national development. South Korea has vowed to become an AI powerhouse through collaboration between the public and private sectors, allocating 14 trillion won to bolster its national competitiveness in the field.
Europe as a whole is lagging behind in AI development, but individual countries like France and Germany are starting to catch up
Despite Europe as a whole lagging behind in Artificial Intelligence (AI) development, individual countries such as France and Germany are fighting hard to be competitive in the growing AI race. In recent years, both countries have announced plans to invest billions of dollars into the research and development of AI technologies. They seek to build the foundation for better opportunities in the areas of social services, transportation and many other fields that rely heavily on AI innovation.
The UK just does not have the financial clout to go up against US tech industry leaders or their Chinese competitors.