This Week in AI – December 2023, Week 51

From Google’s VideoPoet to OpenAI’s Preparedness Framework, the AI landscape is rapidly evolving. Generative AI’s impact on the upcoming US elections and ethical concerns around AI life prediction are at the forefront. With partnerships and funding announcements, the industry is forging ahead, while grappling with the challenges of AI governance and security.


Google Unveils VideoPoet: Impressive AI Video Generator Amazes Researchers

Google has unveiled VideoPoet, a new multimodal AI video generator that has impressed researchers with its capabilities. Unlike existing models, VideoPoet uses a large language model (LLM) built on the transformer architecture and trained to generate video. The results are striking: longer, higher-quality clips with more consistent motion. In comparisons, viewers preferred VideoPoet's output over that of other models, and the system has been tailored to produce videos in portrait orientation by default. However, it is not yet available for public use.


Generative AI Threatens Chaos in 2024 US Elections

Experts warn that generative AI could sow chaos in the 2024 US elections, with chatbots and deepfakes playing a significant role. The use of AI in political campaigns is already raising concerns, fueling fears of a wider spread of false information. Some Big Tech companies are taking steps to address these risks, but legal experts caution that AI's potential impact on the democratic fabric remains a serious issue.


Jaxon AI and IBM Watsonx Partner to Combat AI Hallucination with DSAIL Technology

Jaxon AI has partnered with IBM to combat AI hallucination using its Domain-Specific AI Language (DSAIL) technology. DSAIL aims to address inaccuracies in large language models by building on IBM watsonx foundation models and taking a novel approach to developing more reliable AI solutions: it minimizes the risk of hallucination by converting natural-language inputs into a binary language format and subjecting them to rigorous checks. Separately, IBM is working to embed watsonx in software vendors' tools, aiming to give organizations reliable, trusted AI foundation models.


Ludo.ai Launches Text-to-Video Generator Tool for Game Developers

Ludo.ai has unveiled a text-to-video generator tool for game developers that lets them create gameplay videos in seconds. The tool aims to simplify ideation and creation, offering a realistic glimpse of games in action. With the company's focus on boosting productivity and driving creativity, the tool could be a game-changer for the industry. Ludo.ai's CEO, Tom Pigott, expressed optimism about the tool's impact and the company's goal of becoming the go-to platform for small and medium-sized studios.


OpenAI Introduces “Preparedness Framework” for Monitoring AI Dangers

OpenAI has unveiled its "Preparedness Framework" to monitor and manage the potential dangers of powerful AI models. The framework includes risk "scorecards" to track harm indicators and emphasizes rigorous evaluations and forecasts of AI capabilities and risks. The announcement follows criticism of OpenAI's governance and accountability, prompting the lab to address concerns about responsible and ethical AI development. The framework contrasts with Anthropic's more formal, prescriptive approach, and signals a notable step for AI safety.


AI-powered Sales and Marketing Platform, Ignition, Secures $8M Funding to Revolutionize Enterprise Software

Ignition, an AI-powered sales and marketing platform, has secured $8 million in funding to unify product, marketing, and sales workflows, boost revenue, and disrupt enterprise software. The San Francisco startup aims to address misalignment across go-to-market functions, using AI and workflow automation to untangle dysfunctional cross-department dynamics. With over 2,500 users at brands like Square and Uberflip, Ignition plans to roll out new AI features and product enhancements in 2024. Investors have expressed confidence in Ignition's potential to become a category-defining player in SaaS.


Industry Leaders Share Insights at Exclusive Generative AI Governance Event

An exclusive event for executives will delve into the five key ingredients of an effective generative AI governance blueprint for organizations. The event, limited to 75 attendees, will feature insights from industry leaders and experts in generative AI governance, including Wells Fargo CIO Chintan Mehta and Silen Naihin, a founding engineer of AutoGPT. Attendees will be able to engage with the speakers, network with other industry professionals, and gain practical guidance for staying ahead in generative AI governance.


AI Life Predictor Raises Ethical Concerns

Artificial intelligence developed to model written language can also be used to predict events in people's lives. A research project from DTU, the University of Copenhagen, ITU, and Northeastern University in the US shows that so-called 'transformer models' trained on large amounts of data about people's lives can systematically organize that data, predict what will happen in a person's life, and even estimate the time of death. The resulting model, Life2vec, answers general questions such as 'death within four years?'. When the researchers analyzed the model's responses, the results were consistent with existing findings in the social sciences: all else being equal, individuals in a leadership position or with a high income are more likely to survive, while being male, skilled, or having a mental diagnosis is associated with a higher risk of dying. The researchers point out that Life2vec raises ethical questions around protecting sensitive data, privacy, and the role of bias in the data. As a next step, they want to incorporate other types of information, such as text and images or information about our social connections.
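The core idea behind this line of research is to treat structured life events like words in a sentence, so a transformer can consume them the way an LLM consumes text. A minimal sketch of that encoding step, with purely illustrative event fields and function names (this is not the actual Life2vec pipeline):

```python
def events_to_tokens(events):
    """Flatten structured life events into a token sequence,
    turning each record into discrete "words" a sequence model can read."""
    tokens = []
    for event in events:
        # e.g. {"year": 2010, "type": "job", "value": "nurse"}
        tokens.append(f"YEAR_{event['year']}")
        tokens.append(f"{event['type'].upper()}_{event['value']}")
    return tokens

def build_vocab(sequences):
    """Map every distinct token to an integer id, as a tokenizer would,
    so sequences can be fed to an embedding layer."""
    vocab = {}
    for seq in sequences:
        for tok in seq:
            vocab.setdefault(tok, len(vocab))
    return vocab

# Toy example: two life events for one (fictional) person.
life = [
    {"year": 2010, "type": "job", "value": "nurse"},
    {"year": 2012, "type": "diagnosis", "value": "asthma"},
]
tokens = events_to_tokens(life)
vocab = build_vocab([tokens])
ids = [vocab[t] for t in tokens]  # integer ids ready for a transformer
```

Once events are integer sequences, standard language-model machinery (embeddings, attention, a classification head for outcomes such as 'death within four years?') can be applied unchanged.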


Fable’s SAGA: Open-Sourced AI Tool Empowers Developers for Westworld-like Simulations

The Simulation by Fable has open-sourced its new AI tool, SAGA, for creating Westworld-like simulations featuring AI characters. The tool aims to empower developers to craft immersive simulations in which AI characters are the main actors within games. The company's long-term goal is to train embodied AIs within simulations, fostering an intelligent community capable of venturing beyond simulated realms and onto the internet as peers. By open-sourcing SAGA, Fable hopes to spur innovation and experimentation in the developer community and push the boundaries of AI-driven simulations.


Protecting AI Model Weights: Balancing Security and Innovation

AI model companies like Anthropic and OpenAI are deeply concerned about protecting the weights of their sophisticated and powerful language models. The model weights, which are crucial for the model’s performance, are at risk of being accessed by malicious actors, posing serious security threats. While some experts advocate for open-source models, others emphasize the need for stringent security measures to prevent unauthorized access to these valuable assets. As the debate continues, the industry grapples with the challenge of balancing innovation and research with the imperative to safeguard model weights from potential exploitation.



Book our New and Next Technology Workshop!

A one-day exploration and experience of cutting-edge technology that's shaping the future.
Get inspired, gain new insights, and get hands-on experience with new technologies. Lay the foundation for determining a strategy, developing new products, and introducing new tools to your company.
