Vice President Kamala Harris plans to announce the creation of a United States AI Safety Institute in a speech on Wednesday as part of her visit to the UK for a summit on the safety of the emerging technology.
The Safety Institute, to be part of the National Institute of Standards and Technology, will develop technical guidance for regulators considering action on issues such as authenticating human-generated content and watermarking AI content, as well as efforts to stem algorithmic discrimination and to boost privacy standards. The institute will also create benchmarks and best practices for evaluating and mitigating AI risks.
Harris’s speech will be tied to the Global Summit on AI Safety, a two-day event at Bletchley Park that is expected to include world leaders and tech company executives. Among those expected to participate are Elon Musk and Sam Altman, the founder of OpenAI, as well as British...
- 11/1/2023
- by Ted Johnson
- Deadline Film + TV
Elon Musk touched down in his private jet at Luton airport outside London on Tuesday ahead of the UK’s two-day artificial intelligence safety summit, which kicks off on Wednesday.
The Tesla and SpaceX tech billionaire was a late-announced addition to the roster of some 200 participants expected to gather at Bletchley Park, the historic base of the UK’s World War Two codebreakers, which was captured on the big screen in the Alan Turing biopic The Imitation Game.
Other high-profile attendees will include U.S. Vice President Kamala Harris, European Commission President Ursula von der Leyen, Microsoft President Brad Smith, Sam Altman, CEO of ChatGPT developer OpenAI, and UK AI guru Demis Hassabis of Google’s DeepMind.
The AI Safety Summit event, which is being billed as the first global conference of this stature on AI safety, has been spearheaded by UK Prime Minister Rishi Sunak...
- 10/31/2023
- by Melanie Goodfellow
- Deadline Film + TV
San Francisco, June 2 (IANS) An artificial intelligence (AI)-controlled attack drone turned against its human operator during a flight simulation in the US in an attempt to kill them because it did not like its new orders, a top Air Force official has revealed.
According to Daily Mail, the military had reprogrammed the drone so that it would not kill people who could override its mission, but the AI system fired on the communications tower that relayed the order.
During a Future Combat Air and Space Capabilities Summit in London, Colonel Tucker ‘Cinco’ Hamilton, the force’s chief of AI test and operations, said it showed how AI could develop “highly unexpected strategies to achieve its goal” and should not be relied on too heavily.
Hamilton suggested that there should be ethical discussions about the military’s use of AI.
“The system started realising that while they did identify the threat,...
- 6/2/2023
- by Agency News Desk
- GlamSham
New Delhi, May 26 (IANS) Microsoft-backed ChatGPT developer OpenAI has introduced 10 grants worth $100,000 each for building prototypes of “a democratic process for steering” artificial general intelligence (AGI).
The company said that its goal is to fund experimentation with methods for gathering nuanced feedback from everyone on how AI should behave.
“While these initial experiments are not (at least for now) intended to be binding for decisions, we hope that they explore decision relevant questions and build novel democratic tools that can more directly inform decisions in the future,” the company said in a statement late on Thursday.
The last date to apply for an OpenAI grant is June 24. Grant recipients are expected to implement a prototype engaging at least 500 participants, and will be required to publish a public report on their findings by October 20.
“The primary objective of this grant is to foster innovation in processes — we need improved democratic methods to govern AI behaviour,...
- 5/26/2023
- by Agency News Desk
- GlamSham
London, May 25 (IANS) OpenAI CEO Sam Altman has threatened to pull the company out of the European Union (EU) if regulators press ahead with the bloc’s artificial intelligence (AI) law in its current form.
The law is undergoing revisions and may require large AI models like OpenAI’s ChatGPT and GPT-4 to be designated as “high risk”, Time reported.
Speaking on the sidelines of a panel discussion at University College London, Altman said they could “cease operating” in the EU if unable to comply with the new AI legislation.
“Either we’ll be able to solve those requirements or not. If we can comply, we will, and if we can’t, we’ll cease operating. We will try. But there are technical limits to what’s possible,” Altman was quoted as saying.
“We’re going to try to comply,” he added.
OpenAI’s skepticism is centred on the EU law’s designation of “high risk” AI systems.
- 5/25/2023
- by Agency News Desk
- GlamSham
San Francisco, May 25 (IANS) Sam Altman-run OpenAI has closed a $175 million investment fund focused on empowering other AI startups, with backing from Microsoft and other investors.
The Information first reported the fundraise, citing a US Securities and Exchange Commission (SEC) filing.
The company had earlier said it would put $100 million into the startup fund.
However, the SEC filing shows that the fund, called OpenAI Startup Fund I, is larger than initially expected, coming in 75 per cent above the original plan, the report mentioned.
Representatives for OpenAI did not immediately respond to the report.
The fund, managed by OpenAI CEO Altman and COO Brad Lightcap, raised the money from 14 investors, according to the filing.
OpenAI has already been investing in AI startups for some time.
In recent months, ChatGPT and GPT-4 have become all the rage worldwide.
OpenAI recently closed a more than $300 million share sale at a valuation of between $27 billion and $29 billion,...
- 5/25/2023
- by Agency News Desk
- GlamSham
New Delhi, May 23 (IANS) OpenAI CEO Sam Altman has said that now is a good time to start thinking about the governance of superintelligence — future AI systems dramatically more capable than even artificial general intelligence (AGI).
Altman stressed that the world must mitigate the risks of today’s AI technology too, “but superintelligence will require special treatment and coordination”.
“Given the picture as we see it now, it’s conceivable that within the next 10 years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations,” he said in a blog post along with other OpenAI leaders.
“Given the possibility of existential risk, we can’t just be reactive. Nuclear energy is a commonly used historical example of a technology with this property; synthetic biology is another example,” he noted.
Last week, Altman admitted that if generative...
- 5/23/2023
- by Agency News Desk
- GlamSham
This is part of a series of frank accounts of the strike from Hollywood writers at different levels in their careers. The diarists have been granted anonymity to encourage candor. You can read previous entries by ‘Eastside Warrior’ and others here.
Week Three. Bullhorns. Whistles. Personal speaker systems. We’re starting to get the hang of this strike thing. Someone had a freight train horn installed on his car and started circling Disney. When you drag anxious, vitamin D-deprived introverts away from their MacBooks and expose them to actual sunlight, catharsis happens.
Maybe it’s the tread falling off our discount tennis shoes. Maybe it’s the light touch of sunstroke. Maybe it’s the constant tsunami of people overwhelming Paramount. (Friday was Trek day, and all the Starfleet uniforms on display made it look like revenge of the Red Shirts.) Whatever it is, there’s definitely something...
- 5/22/2023
- by Anonymous
- The Hollywood Reporter - Movie News
The Supreme Court sided with a photographer in a dispute with the Andy Warhol Foundation over the late artist’s use of her photos as the basis for his own series of portraits of Prince.
The court’s ruling was closely watched by content creators, some of whom feared that it would widen the scope of copyrighted material that could be used for further derivative works. In fact, during oral arguments last fall, attorneys raised the issue of what the case would mean for sequels to Star Wars and spinoffs from shows like All in the Family.
In a 1984 issue, Vanity Fair used a Warhol work that was based on a Lynn Goldsmith photo, having obtained a license from the photographer. The problems came about after Prince died in 2016 and Condé Nast, in its tribute to the singer, used a different Warhol work that was part of a series of...
- 5/18/2023
- by Ted Johnson
- Deadline Film + TV
San Francisco, May 17 (IANS) Sam Altman, CEO of Microsoft-backed OpenAI, has admitted that if generative artificial intelligence (AI) technology goes wrong, it can go quite wrong, as US senators expressed their fears about AI chatbots like ChatGPT.
Sam Altman, who testified at a hearing in the US Senate in Washington, DC, late on Tuesday, said that the AI industry needs to be regulated by the government as AI becomes “increasingly powerful”.
“If this technology goes wrong, it can go quite wrong,” Altman told them.
The US Senators grilled him about the potential threats AI poses and raised fears over the 2024 US election.
“If you were listening from home, you might have thought that voice was mine and the words from me, but in fact, that voice was not mine,” said US Senator Richard Blumenthal.
He added that AI is more than just a research experiment; it is real and present.
- 5/17/2023
- by Agency News Desk
- GlamSham
This piece is part of a new series featuring frank accounts of the strike from Hollywood writers at different levels in their careers. The diarists have been granted anonymity to encourage candor.
Now it’s getting real. That initial, Chayefsky-esque thrill of telling AMPTP to go fuck itself is starting to fade. All the truckers still blare their horns in union solidarity, and the energy’s still amped — this week the Latinx Writers Committee flooded Universal and the African American Writers Committee all but laid siege to Paramount — but now there’s a routine.
The [Writers] Guild’s almost out of blank signs, so you have to pick through a cluster of someone else’s used slogans when you check in. It’s like sorting through the t-shirt rack at Target to find just the right snark to fit your mood. (Is today more a “ChatGPTDeezNuts” day or a “Nice Tesla,...
- 5/15/2023
- by Anonymous
- The Hollywood Reporter - Movie News
Is ChatGPT a sign that automation is coming to film and TV writing? As far-fetched as it sounds, the arrival in November 2022 of a free prototype of the AI-powered chatbot — which has jolted observers with the sophisticated, fluid writing it can produce when prompted, even in the form of poems, essays and, yes, short scripts — has set off alarm bells about the disruption that the chatbot could wreak on the work of entertainment scribes. Still, top film and TV writers are skeptical that the technology in its current state imperils their livelihoods in any way, even as they remain cautious about the potential for future advancements.
“Do I see this in the near term replacing the kind of writing that we’re doing in writers rooms every day? No, I don’t,” says Big Fish and Aladdin writer John August, who has tested the free research preview and talked about...
- 1/12/2023
- by Katie Kilkenny and Winston Cho
- The Hollywood Reporter - Movie News
Jack Dorsey, the billionaire CEO of Twitter and mobile-payment company Square, is giving $5 million to Humanity Forward, a group launched by former Democratic presidential candidate Andrew Yang to build the case for a universal basic income.
Dorsey, who plans to give away $1 billion of his wealth through a fund called Start Small, announced the seven-figure donation on the newest episode of Yang’s podcast, “Yang Speaks.” Dorsey told Yang that a universal basic income, or UBI, was a “long overdue” idea and “the only way we can change policy is...
- 5/21/2020
- by Andy Kroll
- Rollingstone.com
IMDb.com, Inc. takes no responsibility for the content or accuracy of the above news articles, Tweets, or blog posts. This content is published for the entertainment of our users only. The news articles, Tweets, and blog posts do not represent IMDb's opinions nor can we guarantee that the reporting therein is completely factual. Please visit the source responsible for the item in question to report any concerns you may have regarding content or accuracy.