Content Library
Discover the latest trends and insights in the legal industry. Learn more about Generative AI, Delivering Legal Services and Strategy and Scale in a Complex World.
Get access or login to the Private Content Library
This resource originated as a primary-source appendix for presentations on Generative AI that we began delivering in December 2022. We are still going (700 slides and counting). We offer introductory presentations for larger audiences still getting up to speed. We partner with law firms on CLEs that address the many legal and regulatory considerations that add complexity and shape strategy. We lead high-level conversations among informed stakeholders regarding real-world use cases. Feel free to reach out.
As the presentation has grown, so has the appendix, to the point where we’ve both added supplemental material (that is not in the presentation) and maintained historical materials (which have fallen out of the presentation given how quickly the topic moves). The resource is comprehensive insofar as it contains a primary source for every slide in the presentation. But we think a truly comprehensive resource encompassing the vast and rapidly evolving subject of Generative AI would be impossible. We’re giving you a lot. But we would never pretend to be offering everything. Though if you see a glaring hole, feel free to reach out. We’re always looking to make this resource more valuable.
This is all completely free.
Yet we are consistently asked how LexFusion makes money. “If you are not paying for the product, you are the product” has become common sense. Fair enough.
We accelerate legal innovation. LexFusion curates promising legal innovation companies. We invest. They also pay us. We support strategy, product, sales, marketing, events, etc. For example, Macro joined LexFusion a year before their $9.3m seed round, led by Andreessen Horowitz. Similarly, Casetext joined LexFusion two years before their $650m cash acquisition by Thomson Reuters. While the primary credit always belongs to the founders and their teams (we identify and then accelerate winners), they, like our other members, will enthusiastically confirm that LexFusion played a material role in rapidly advancing their products and business.
Much of our value to our legal-innovation clients is premised on our unparalleled market listening. We frequently provide free presentations and consultations, absent any sales agenda, to law departments and law firms to foster conversations that augment our market insights. This is where the confusion sets in. Our customers (law departments and law firms) are distinct from our clients (legal innovation companies). Because our customers don’t pay us, they want to know where the money comes from.
We regularly meet with ~500 law departments and ~300 law firms. LexFusion is, ultimately but not directly, compensated based on the value we derive from these interactions. For our business to be sustainable, the exchange of value must merit their scarce time over repeat interactions—hence the free content and consults. If you know us, you probably stopped reading already. If you don’t know us, we hope the depth of this free content and some testimonials from our friends are sufficient to establish our bona fides.
To the extent you are interested in an even deeper dive, Bill Henderson wrote a wonderful longform piece on our business model, which we followed up with an even longer piece on the centrality of trust to the LexFusion value proposition. We have yet to perfect our elevator pitch. But we do our best to always be transparent. Without building and maintaining trust, our business model crumbles. Our legal-innovation clients will cycle through—that is inherent in our model. Our sole enduring asset is our relationships with our customers. We are people-centric, and we live our motto: “better together!”
Caught Our Attention
Cyberattacks are hindering mergers and acquisitions (M&A) deals, according to a recent firm report. The article discusses various legal cases involving companies like Symbotic Inc., MongoDB, Epic Systems Corp., and Sunrun Installation Services, highlighting shareholder derivative lawsuits and employment discrimination claims. The report emphasizes the growing impact of cybersecurity threats on corporate transactions and legal proceedings.
In-house attorneys are increasingly leaving their positions due to overwhelming workloads and limited advancement opportunities. The article highlights several ongoing shareholder derivative lawsuits involving companies like Symbotic Inc., MongoDB, Epic Systems Corp., Sunrun Installation Services, and boohoo.com UK Ltd., detailing the legal representation and allegations against these companies.
New merger-review process may impede some deals, introducing subjectivity and complications. Law firms Debevoise & Plimpton and Paul, Weiss, Rifkind, Wharton & Garrison are mentioned. The article discusses potential challenges and implications for mergers in the legal landscape.
A Florida mother has sued the parent company of Character.AI, claiming the chatbot app contributed to her 14-year-old son's suicide. The lawsuit argues that AI technologies are more addictive than social media. Character.AI has implemented safety measures, including a pop-up directing users to the National Suicide Prevention Lifeline. The case faces legal challenges but raises concerns about AI's impact on mental health.
The US Treasury used AI to recover $4 billion in fraud and improper payments in the 2024 fiscal year, a significant increase from $652.7 million the previous year. Implementing machine learning since late 2022, the Treasury analyzes data to detect fraud. With 1.4 billion payments valued at $6.9 trillion annually, the agency plans to expand AI use for regulatory efforts against financial crimes. Other government departments, like the IRS, are also adopting AI for fraud detection.
General Counsel are struggling to understand the unpredictable nature of law firm rate hikes. The article discusses various legal cases involving trademark infringement and securities class actions, highlighting the challenges legal departments face in managing costs amid rising rates.
The article presents the Swiss Leading Decision Summarization (SLDS) dataset, containing 18,000 court rulings from the Swiss Federal Supreme Court in German, French, and Italian, with German headnotes. It addresses the challenges of legal research and the need for automated headnote creation. The study evaluates various mT5 models, showing that while proprietary models excel in some areas, fine-tuned smaller models are competitive. The dataset is publicly released to support multilingual legal summarization and assistive technology development for legal professionals.
Researchers at Penn State developed an electronic tongue using AI to identify differences in similar liquids, achieving over 95% accuracy. The device assesses quality, authenticity, and freshness, with applications in food safety and medical diagnostics. The AI defines its own assessment parameters, improving decision-making. The research, supported by NASA, highlights the potential for broad applications across industries due to the sensor's robustness and practicality.
Wimbledon will implement AI technology for line calling in 2025, replacing human umpires for the first time in 147 years. The Live Electronic Line Calling (Live ELC) system, tested successfully in 2024, uses 12 cameras and microphones to track the ball. This decision aims for maximum accuracy while balancing tradition and innovation. Additionally, AI-powered commentary and analysis were introduced in 2023, developed in partnership with IBM, enhancing the tournament's coverage.
The DOJ is considering breaking up Google after a ruling that it holds a monopoly in the search market. Recommendations include limiting Google's use of products like Chrome and Android to favor its search services and prohibiting certain agreements with Apple and Samsung. Google plans to appeal the ruling, with a decision on remedies expected by August 2025. The DOJ also suggested making search data available to competitors. Google holds 90% of the search market share, generating $48.5 billion in revenue.
Generative AI is making legal actions cheaper and easier, increasing the likelihood of legal challenges from various stakeholders. This shift, combined with a turbulent geopolitical landscape, raises legal risks that resemble mass-produced actions similar to phishing attacks. Companies must adopt cybersecurity strategies to understand vulnerabilities, emerging threats, and develop risk-mitigation and communication plans for stakeholders.
California law AB 2876 mandates AI literacy in K-12 education, updating curricula to include AI principles, applications, limitations, and ethics. Authored by Assemblymember Marc Berman and signed by Governor Gavin Newsom, the law aims to prepare students for an AI-driven future. The California Chamber of Commerce co-sponsored the bill, emphasizing the need for AI skills. Other recent AI-related laws include expanding child pornography laws for AI-generated content and ensuring transparency in AI use in healthcare.
OpenAI is now valued at $157 billion after raising $6.6 billion in a funding round led by Thrive Capital. Major contributions came from Microsoft, SoftBank, Nvidia, and UAE-based MGX. Investors can withdraw funds if OpenAI fails to convert to a fully for-profit company. Apple considered investing but did not participate. OpenAI recently hired its first CFO, Sarah Friar, who helped organize the funding.
The article discusses 12 high-profile AI disasters, highlighting failures in various industries. Examples include Air Canada's chatbot misinformation, Sports Illustrated's use of AI-generated writers, and Gannett's flawed sports articles. Other issues involve discriminatory hiring practices by iTutor Group and Amazon, as well as inaccuracies in healthcare algorithms. The piece emphasizes the risks and consequences of relying on AI and machine learning without proper oversight and understanding.
The article discusses how AI enhances legal services for low-income individuals through two case studies: The California Innocence Project, which uses AI for efficient case reviews, and Housing Court Answers, which employs AI tools to assist tenants with legal issues. Both examples demonstrate AI's potential to improve access to justice and streamline operations in legal aid organizations, emphasizing a human-centered approach and the importance of continuous improvement in AI applications.
OpenAI raised $6.6 billion in the largest venture capital round ever, valuing the company at $157 billion. The round was led by Thrive Capital, with participation from Microsoft, Nvidia, and others. OpenAI is transitioning to a for-profit structure, leading to internal culture clashes and the departure of key employees. The funds will enhance AI research, increase compute capacity, and develop new tools while maintaining a commitment to safety. The investment reflects growing revenue and rising costs.
Rushing for AI ROI can lead to poor investments. Organizations often expect quick returns, but successful AI projects require time and careful planning. Many struggle to find measurable ROI, and targeted AI solutions show more promise. Fear of missing out drives hasty decisions, leading to project failures. Experts recommend starting small and cost-effectively, with realistic expectations for ROI, typically taking 18 to 24 months. A methodical approach is essential for achieving long-term success with AI.
Akin and McDermott Will & Emery have appointed directors of AI, Jeff Westcott and Christopher Cyrus, respectively, to enhance client services through technology. Westcott will focus on AI and emerging technologies at Akin, while Cyrus will lead AI innovation at McDermott. Both firms aim to integrate AI solutions for improved operational efficiency. Other firms like Reed Smith and Latham & Watkins have also made similar hires.
The U.S. Army is testing AI-enabled robot dogs armed with rifles in the Middle East for counter-drone capabilities. These quadrupedal unmanned ground vehicles are being evaluated in Saudi Arabia as part of efforts to explore autonomous systems for military applications. The Pentagon aims to develop cost-effective solutions to rising drone threats, viewing these robot dogs as alternatives to expensive missile systems. The Army's experiments are part of broader initiatives to integrate human-machine capabilities in future combat scenarios.
Opinion 512 addresses the use of generative AI tools by US law firms, emphasizing the need for informed client consent before inputting confidential information. It outlines the risks and benefits of these tools, focusing on self-learning capabilities and the importance of clear communication with clients. The article discusses the implications for legal ethics and the potential impact on generative AI adoption in law firms, while the authors, associated with Zuva, clarify their position on the opinion's relevance to their technology.
California has enacted 18 new AI laws addressing issues like AI risk, training data transparency, privacy, and education. Key laws include SB 896 for risk analysis, AB 2013 for training data disclosure, and regulations on deepfakes and AI in healthcare. Governor Newsom vetoed the controversial SB 1047, advocating for a flexible regulatory approach. The laws aim to manage AI's impact on society while ensuring safety and ethical use.
The article discusses the limitations of Lexis+AI in administrative law research, highlighting its ineffectiveness for complex queries and incomplete access to regulatory materials. It emphasizes the need for critical evaluation of GAI tools by legal researchers and suggests that traditional research methods remain essential, as GAI cannot replace the nuanced work of administrative attorneys.
CISOs must critically assess AI integration in cybersecurity by asking three essential questions: where AI can be most effective, whether there is proof of AI success in specific use cases, and the quality of data provided to AI models. While AI has potential to enhance cybersecurity, it requires careful consideration of established applications and high-quality data to avoid flawed results. A strategic approach is vital for leveraging AI's capabilities in a rapidly evolving threat landscape.
Archaeologists have discovered 303 new geoglyphs near the Nazca Lines in Peru using AI and drones, nearly doubling the known figures. Dating back to 200 BC, these smaller geoglyphs depict animals and humans, providing insights into the transition from the Paracas culture to the Nazca civilization. The use of AI allowed for rapid identification of previously undetectable figures, revolutionizing archaeological research in the area.
Generative AI can significantly outperform human CEOs in strategic decision-making, especially in data-driven tasks like product design and market optimization. An experiment in the automotive industry showcased the potential of AI models in these areas, raising questions about the future role of AI in executive positions.
California Governor Gavin Newsom vetoed a bill imposing safety requirements on AI models, siding with Silicon Valley. He argued the bill was too broad and didn't consider deployment contexts. Instead, he signed a less comprehensive bill for AI risk study and plans to develop future legislation with experts. The veto sparked debate, with proponents seeing it as a missed opportunity for regulation, while opponents, including Google and OpenAI, argued it would burden developers. Newsom's decision reflects his tech-friendly stance amid concerns about AI's impact.
At least 26 U.S. states are regulating generative AI in elections due to concerns over misinformation and voter suppression. Nineteen states have passed laws against deepfakes, while others are considering similar measures. Experts warn that without federal regulations, state efforts will be insufficient to address the evolving challenges posed by AI in political communications.
Security researcher Johann Rehberger discovered a vulnerability in ChatGPT's long-term memory feature, allowing attackers to plant false memories and exfiltrate user data indefinitely. After reporting the issue to OpenAI, which classified it as a safety concern, Rehberger created a proof-of-concept exploit that sent all user input to a malicious server. OpenAI issued a partial fix, but the risk of prompt injection remains, prompting users to monitor and manage their stored memories carefully.
Rising power demand from data centers is revitalizing the US offshore wind industry, according to Ørsted's CEO. The AI boom is shifting perceptions, with tech companies seen as allies in financing clean energy. A notable deal between Microsoft and Constellation Energy to restart the Three Mile Island nuclear plant exemplifies how data center demand can be met sustainably. However, challenges remain, including uranium supply and the need for additional gas turbines, indicating that some emissions may still increase despite the push for clean energy.
An AI model, TxGNN, developed by Harvard Medical School, identifies existing drugs for repurposing to treat over 17,000 rare diseases. It outperforms existing models by nearly 50% in drug candidate identification and provides rationale for its recommendations. The tool aims to close treatment gaps for rare conditions and is available for free to clinicians. The approach leverages known drug safety profiles, potentially accelerating the discovery of new therapies more efficiently than traditional methods.
The article discusses "Privilege Expansion" through AI, which enhances access to services like education, healthcare, and personal styling by making them more affordable and personalized. It highlights how AI can disrupt traditional roles and democratize access to previously expensive services, predicting the rise of new companies leveraging this trend. Examples include AI tutors, healthcare bots, and virtual stylists, emphasizing the potential for AI to transform consumer experiences and reduce barriers to access.
Generative AI is a rapidly emerging technology with significant economic implications tied to its adoption rates. A survey in August 2024 found that 39% of U.S. adults aged 18-64 had used generative AI, with over 24% of workers using it weekly and nearly 11% daily. Its adoption is occurring faster than that of personal computers and the internet, and it is classified as a general-purpose technology applicable across various jobs and tasks.
Microsoft introduced Correction, a tool to revise AI-generated text that may contain errors. While it aims to improve reliability, experts express skepticism about its effectiveness and potential new issues. The tool is part of Microsoft’s Azure AI Content Safety API and can be used with various AI models. Concerns persist about the accuracy of AI outputs and the implications of relying on such technology in critical fields. Microsoft faces pressure to demonstrate the value of its AI investments amid doubts about its long-term strategy.
An AI system has achieved 100% success in solving CAPTCHA tests, which are designed to differentiate humans from bots. Developed by Andreas Plesner and colleagues at ETH Zurich, the AI model, named YOLO, was trained on thousands of images and specifically tackled Google's reCAPTCHAv2 challenges.
Middle Eastern sovereign wealth funds are significantly investing in AI startups, with funding increasing fivefold over the past year. Key players include Saudi Arabia, UAE, Kuwait, and Qatar, with notable investments in OpenAI and partnerships for AI infrastructure. Concerns about human rights issues in Saudi Arabia persist, but the U.S. sees these investments as a geopolitical advantage against rivals like China.
The U.N. advisory body proposed seven recommendations for AI governance, including establishing a panel for reliable scientific knowledge, creating a global AI fund, and forming a global AI data framework. The recommendations aim to address AI-related risks and gaps in governance, especially as AI use grows rapidly, raising concerns about misinformation and control by a few multinational companies.
82% of UK lawyers are adopting AI for faster legal work, a significant increase from 2023. Key benefits include quicker delivery (71%), improved client service (54%), and competitive advantage (53%). Many firms are adjusting pricing structures due to AI. Concerns remain about AI accuracy, but confidence in AI tools linked to reliable legal sources is growing.
Nvidia experienced its largest single-day valuation loss, with stock falling 9.5% due to concerns about AI and the US economy.
LinkedIn has opted users into training AI models using their data without consent. Users can opt out via account settings, but this won't affect past data usage. LinkedIn claims to use privacy-enhancing technologies to protect personal data and does not train models on users in the EU, EEA, or Switzerland. This move follows Meta's admission of similar practices.
The document evaluates Retrieval-Augmented Generation (RAG) using Large Language Models (LLMs) and introduces FRAMES, a dataset for assessing LLMs' factuality, retrieval, and reasoning abilities. It highlights that state-of-the-art LLMs achieve only 0.40 accuracy without retrieval, improving to 0.66 with a multi-step retrieval pipeline. The aim is to enhance the evaluation and robustness of RAG systems through a unified framework that addresses integration of information from multiple sources.
A study by Common Sense Media reveals that Black teenagers in the US are twice as likely to be falsely accused of using AI tools for homework compared to their white and Latino peers. The report highlights racial biases in AI detection software and the education system, exacerbating disciplinary disparities among marginalized groups and negatively impacting academic performance.
Google.org announces over $25M in funding for five education organizations to enhance AI skills for over 500,000 educators and students in the U.S. Initiatives include AI curricula and teacher training, aimed at equitable access to AI resources. Organizations involved include ISTE, 4-H, aiEDU, CodePath, and STEM From Dance, focusing on diverse communities and closing educational gaps.
Salesforce is investing over $50 million in free AI training through 2025, offering hands-on courses and certifications via its Trailhead platform. The company will open AI Centers globally, including a pop-up in San Francisco, to help upskill employees and communities. Salesforce aims to address the AI skills gap and has already helped hundreds of thousands develop technical skills.
AI experts are crowdsourcing "Humanity's Last Exam" to challenge advanced AI systems with difficult questions. Organized by the Center for AI Safety and Scale AI, submissions are due by November 1. The exam will focus on abstract reasoning and exclude topics on weapons. Winners will co-author a related paper and can win up to $5,000. The test aims to evaluate AI's capabilities beyond previous benchmarks.
Microsoft and BlackRock are forming a partnership, GAIIP, to raise up to $100 billion for AI data centers and energy infrastructure. The initiative aims to gather $30 billion initially, with a future goal of $100 billion. Other participants include Global Infrastructure Partners and MGX. The partnership focuses on sustainable infrastructure to support AI advancements and address the growing demand for data center capabilities.
Salesforce is shifting its AI strategy, introducing autonomous AI agents to handle tasks like customer service. The company will charge $2 per conversation, adapting its business model amid potential job losses. CEO Marc Benioff emphasized the technology's ability to expand workforce capacity without hiring. Despite the AI hype, software vendors like Salesforce have seen limited revenue gains compared to hardware makers like Nvidia.
The article surveys trustworthiness in Retrieval-Augmented Generation (RAG) systems, highlighting their potential to enhance Large Language Models (LLMs) while addressing risks of generating undesirable content. It proposes a framework assessing trustworthiness across six dimensions: factuality, robustness, fairness, transparency, accountability, and privacy. The authors review existing literature, create an evaluation benchmark, and identify future research challenges to improve RAG systems' trustworthiness in real-world applications.
A Canadian study found that the AI tool Chartwatch reduced unexpected hospital deaths by 26%. Developed at St. Michael's Hospital, it alerts nurses to patient deterioration by analyzing medical records. The study involved over 13,000 admissions and showed significant improvements in patient outcomes, highlighting the potential of AI in healthcare despite the need for further research and broader implementation.
The article discusses a potential "subprime AI crisis" in the tech industry, where companies heavily invest in generative AI without clear profitability. It highlights concerns over cultural issues within firms like Microsoft, reliance on big tech for cloud services, and the risk of a market collapse as demand for AI features may not sustain, leading to financial instability across the sector.
The article discusses the challenges enterprises face in building data pipelines and AI infrastructure. It outlines phases for getting enterprise-ready for AI, including starting with cloud providers, scaling solutions, and optimizing costs. It emphasizes the importance of continuous data management and the need for robust inference hosting models, while highlighting the competitive advantage of effective data pipelines in AI programs.
Data center emissions from Google, Microsoft, Meta, and Apple may be 662% higher than reported. The rise of AI increases energy demands, complicating emissions transparency. Companies use renewable energy certificates for accounting, leading to discrepancies between reported and actual emissions. Despite claims of carbon neutrality, emissions are projected to rise significantly as data centers' electricity demand doubles by 2030.
EvenUp, a San Francisco-based legal AI startup, is in talks to raise funding at a $1 billion valuation. The company develops AI software for personal injury lawyers to compile claims. Current investor Bain Capital Ventures may lead the round, which would double its previous valuation. EvenUp has raised about $100 million in funding over three rounds in just over a year.
The article critiques current legal tech benchmarks, particularly Paxton AI and BigLaw Bench, for lacking relevance to real legal workflows. It advocates for open, multi-stakeholder benchmarks that accurately reflect legal tasks and customer impact. The author emphasizes the need for better evaluation methods and user-centric metrics in assessing AI performance in legal contexts.
Fei-Fei Li has raised $230 million for her new AI startup, World Labs, with investors including Andreessen Horowitz, Ashton Kutcher, and Nvidia. The company aims to develop software that utilizes images and data to create "large world models" for decision-making in three-dimensional environments. This funding reflects ongoing investor interest in advanced AI technologies.
Meta is restarting plans to train AI using public posts from U.K. users on Facebook and Instagram, incorporating regulatory feedback for a revised opt-out approach. Users will receive notifications, and the company aims to reflect British culture in its AI models. Previous regulatory concerns led to a pause, but Meta is now proceeding, despite ongoing scrutiny regarding data protection compliance.
The article discusses the impact of AI on the legal profession, highlighting its potential to disrupt traditional business models. It covers themes such as data protection, efficiency improvements, job security concerns, and the need for in-house training. While some law firms are adopting AI tools for efficiency, there are worries about the long-term effects on legal training and job roles.
AI's potential is limited by its current design, often reduced to simple tasks activated by buttons. While AI tools enhance productivity, they fail to redefine workflows or eliminate mundane tasks. True innovation requires startups to automate knowledge work entirely, moving beyond mere enhancements to existing software. The article emphasizes the need for ambitious visions that transcend current limitations.
Facebook admitted to scraping public data from Australian adult users to train AI, with no opt-out option, unlike in the EU. Meta's privacy director confirmed that all public posts since 2007 could be used unless set to private. The inquiry highlighted concerns over privacy laws in Australia compared to Europe, with calls for reform to protect user data.
UBS Group AG has created an AI tool that analyzes over 300,000 companies in under 20 seconds to assist clients with potential M&A deals. The tool, described as an M&A "co-pilot," generates buy-side ideas and identifies potential buyers in sell-side situations, according to Brice Bolinger, UBS head of M&A Switzerland, at a conference in Zurich.
Waymo's self-driving cars are involved in fewer crashes than human drivers, with most serious incidents caused by human drivers. Waymo reports a significant reduction in injury-causing crashes compared to typical human drivers in San Francisco and Phoenix. Despite some data errors, the overall analysis suggests that Waymo vehicles are safer on the road.
Nevada will use a Google AI system to recommend decisions for unemployment appeals hearings, aiming to expedite the process. Concerns about accuracy and potential bias have been raised, as the AI analyzes hearing transcripts to issue recommendations. The system intends to address a backlog of cases but must ensure human oversight to prevent errors that could affect claimants' benefits.
BP has signed a five-year deal with Palantir to enhance decision-making using AI in its oil and gas operations. The partnership aims to improve data analysis and operational performance through a "digital twin" of BP's sites. This follows a decade-long collaboration and includes measures to ensure safe AI deployment. BP is also increasing its tech investments under CEO Murray Auchincloss.
ESPN's AI-generated recap of Alex Morgan's final match failed to mention her, despite her significance as a two-time World Cup winner. The recap focused on the game's outcome and other players, raising concerns about the quality of AI-generated content. ESPN later published a separate article about Morgan's emotional farewell, but it was less visible than the AI recap.
The article discusses exploring novel research ideas for code generation using LLMs, focusing on temporal dependencies. It includes reviewer evaluations on novelty, feasibility, effectiveness, and excitement. The proposed methods are compared against baselines, revealing varying success rates. The execution agent's performance is analyzed, highlighting the need for careful verification of generated code implementations.
California lawmakers passed over a dozen bills to regulate AI, focusing on safety, discrimination, and protecting children. Key legislation includes Senate Bill 1047, addressing potential AI threats, and other bills targeting deepfakes and child safety. However, some proposed laws aimed at preventing discrimination were shelved, raising concerns about industry influence on regulation. The effectiveness of these measures remains uncertain amid ongoing debates about AI's societal impact.
Ilya Sutskever, co-founder of OpenAI, launched a new AI startup, Safe Superintelligence (SSI), raising $1 billion to develop safe AI systems. Valued at $5 billion, SSI aims to hire top talent and acquire computing power. Investors include Andreessen Horowitz and Sequoia Capital. Sutskever emphasizes a different approach to scaling AI compared to his previous work at OpenAI.
AlphaProteo is a new AI system that designs novel proteins to bind to target molecules, enhancing drug development and disease understanding. It outperforms existing methods in binding strength and success rates. Trained on extensive protein data, it aims to accelerate biological research while addressing biosecurity concerns through responsible development and collaboration with the scientific community.
OpenAI has surpassed 1 million paid users for its corporate ChatGPT versions, including ChatGPT Team and Enterprise, reflecting strong business demand. The company introduced these products to boost revenue and compete in the AI market. While growth is notable, the average number of users per corporate customer remains unclear. The majority of users are in the US, with significant interest from Germany, Japan, and the UK.
The US, EU, and UK are set to sign the world's first international AI treaty, focusing on human rights and accountability for AI systems. The treaty mandates respect for citizens' rights and provides legal recourse for violations. Meanwhile, the EU has implemented its own AI regulations, and California is drafting state-level AI laws, reflecting ongoing global efforts to regulate AI development and deployment.
The article critiques Chatbot Arena, a benchmark for AI models, highlighting biases from human raters and the lack of transparency in its evaluation process. While it provides real-time insights into model performance, concerns about user representation and commercial ties raise questions about its reliability as a definitive standard for measuring AI intelligence.
The U.S. Department of Justice has subpoenaed Nvidia as part of an antitrust investigation into its practices. Concerns include Nvidia making it difficult for buyers to switch suppliers and penalizing those not using its AI chips. The company faces scrutiny from regulators in multiple countries, and its stock has seen significant fluctuations amid investor concerns about AI investments.
X has permanently stopped Grok AI from using EU citizens' tweets following a court action by Ireland's Data Protection Commissioner. The DPC raised concerns about the processing of personal data for AI training without consent. The issue will now be referred to the European Data Protection Board for further adjudication and clarity on data processing regulations in AI models.
Companies have doubled their generative AI deployment efforts, with 66% of CIOs working on AI copilots, up from 32%. Microsoft Azure leads in AI inference spending, with 60% of respondents planning to increase investment. OpenAI models are used by 70% of companies, while Google Cloud usage is at 18%. Despite progress, Gartner predicts 30% of Gen AI projects may be abandoned by 2025.
Gartner reports that one-third of generative AI projects will be abandoned by 2025 due to struggles in proving value and high costs, ranging from $100,000 to $20 million. Despite challenges, some companies report benefits like revenue increases and productivity gains. The research highlights the need for a higher tolerance for future financial investments and warns of inadequate risk controls and poor data as potential pitfalls.
Volkswagen is introducing its ChatGPT voice assistant in the U.S. for models equipped with the IDA voice assistant, starting September 6. The rollout includes the 2025 Jetta, Jetta GLI, and 2024 ID.4 vehicles. A subscription is required, but the ID.4 and ID. Buzz offer three years of free access.
The article discusses AI safety, emphasizing the need for proactive measures, societal resilience, and adaptive research infrastructure. It outlines strategies for evaluating risks, stress-testing safety cases, and addressing AI welfare. As AI systems become more advanced, the focus shifts to ensuring alignment and making informed decisions, ultimately advocating for democratic oversight in high-stakes scenarios.
Canva is increasing subscription prices for its Teams service by over 300% due to new generative AI features. Users in the US may see annual fees rise from $120 to $500, while Australian users face similar hikes. Existing subscribers are being transitioned to the new pricing model, which has sparked backlash and potential cancellations in favor of Adobe products.
California passed a law requiring consent for using deceased performers' likenesses in AI-created replicas. SAG-AFTRA supports the legislation to protect estates' rights. The law, AB 1836, awaits Governor Newsom's signature, following another bill that tightens consent for living performers. The legislation aims to ensure ethical use of AI in media, reflecting ongoing efforts to enhance performer protections.
Amazon will launch a new AI-powered Alexa in October, using Anthropic's Claude instead of its own AI due to performance issues. The upgraded "Remarkable" Alexa will offer advanced features and be available as a paid subscription service. Amazon has invested $4 billion in Anthropic, aiming to boost Alexa's revenue amid challenges in the voice assistant market.
OpenAI is negotiating deals with publishers to address copyright issues after scraping their content without permission. These deals aim to prevent lawsuits, improve real-time access to information, and enhance OpenAI's reputation. The New York Times has filed a significant lawsuit against OpenAI, claiming copyright infringement. The outcome could reshape the AI landscape, affecting competition and the sustainability of answer engines.
Meta's Llama AI models are being utilized by companies like Goldman Sachs, AT&T, Nomura Holdings, DoorDash, and Accenture for tasks such as customer service and document review. The models have been downloaded nearly 350 million times since their release. Meta aims to establish Llama as an industry standard by providing competitive, open AI technology.
OpenAI announced that ChatGPT has over 200 million weekly active users, doubling since last November. Despite its leadership in generative AI chatbots, competition is fierce with major tech companies like Microsoft, Google, and Meta also vying for users. Additionally, 92% of Fortune 500 companies utilize OpenAI's products, and usage of its automated API has doubled since the release of GPT-4o mini in July.
OpenAI and Anthropic have signed agreements with the U.S. government for AI research and testing, amid regulatory scrutiny. The U.S. AI Safety Institute will access their AI models for evaluation and collaborate on safety improvements. This initiative aims to ensure safe and ethical AI development and is part of a broader effort to establish U.S. leadership in responsible AI practices.
Meta's open-source Llama AI models have seen a tenfold increase in downloads, reaching nearly 350 million. Major companies like Zoom, Spotify, and Goldman Sachs are adopting Llama for various applications. The rise of open-source AI is challenging closed-source models, prompting companies like OpenAI to reduce prices and innovate further. Meta's strategy of openness is fostering a vibrant AI ecosystem.
The article addresses the issue of information asymmetry in legal disputes by presenting a dataset of 310,876 U.S. civil lawsuits. It critiques traditional reputation-based law firm rankings, proposing an outcome-based ranking system that better predicts future performance. The study finds that as interactions between law firms increase, predictability of outcomes diminishes, challenging the notion that win rates stabilize. The aim is to provide a more equitable assessment of law firm quality, enhancing decision-making for litigants.
Employees primarily use AI to double-check work, contrary to management's intentions for initial research and workflow management. A lack of education and training contributes to this disconnect. Companies like PwC and JPMorgan Chase are implementing training initiatives to address AI skills gaps, which threaten overall enterprise progress.
Generative AI
Our Top 10 picks to learn more about Generative AI
“Big Tech is going to have to live with more regulation but … regulators have to be wary about killing the goose that laid the golden egg,” said University of Michigan law professor Daniel Crane.
AI's history includes the 'AI winter' in the mid-1970s to mid-1980s, marked by reduced interest and funding due to unmet expectations. This period shifted focus from mimicking human intelligence to specific AI applications, emphasizing realistic expectations and sustained investment. Lessons from this era remain relevant for today's AI advancements.
“Don’t exaggerate what your AI can do. And what the FTC means by this is that your performance claims have to have scientific support behind them,” artificial intelligence expert Cara Hughes said at a webinar Thursday on the regulatory risks around AI.
While evolving AI models are likely to bring their own set of changes to the insurance industry, such as the creation of new, AI-bespoke policies, until then, companies will have to rely on a portfolio of insurance products for coverage.
Karim Lakhani is a professor at Harvard Business School who specializes in workplace technology and particularly AI. He’s done pioneering work in identifying how digital transformation has remade the world of business, and he’s the co-author of the 2020 book Competing in the Age of AI. Customers will expect AI-enhanced experiences with companies, he says, so business leaders must experiment, create sandboxes, run internal bootcamps, and develop AI use cases not just for technology workers, but for all employees. Change and change management are skills that are no longer optional for modern organizations.
by 3 Geeks (Ryan McClead, Greg Lambert, and Toby Brown). This is part 3 in a 3-part series. Part 1 questions Goldman Sachs data showing that 44% of
I gave a talk on Sunday at North Bay Python where I attempted to summarize the last few years of development in the space of LLMs—Large Language Models, the technology …
The 2023 legislative session has seen a surge in state AI laws proposed across the U.S., surpassing the number of AI laws proposed or passed in past legislative sessions.
IAPP Summer Privacy Fellow Will Simpson compiles global regulatory approaches to AI governance.
Strategy and Scale in a Complex World
Our Top 10 picks to learn more about Strategy and Scale in a Complex World
“Companies are saying they need to make sure their disclosures are backed up by data and can respond when the questions come,” said Tara K. Giunta, co-chair of Paul Hastings’ ESG Risk, Strategy and Compliance Group.
CEOs at Home Depot Inc, Booking Holdings Inc. and other executives found themselves clashing with investors this proxy season as companies faced an unprecedented level of pushback on ESG policies.
Believe it or not, U.S. companies’ biggest antitrust irritant may not be Lina Khan’s Federal Trade Commission. International regulators—mainly in China and Britain—are increasingly elbowing their way into foreign deals that don’t obviously require their attention. The latest example is Intel’s ...
Intel has terminated its agreement to acquire Tower Semiconductor due to regulatory approval delays. The termination fee of $353 million will be paid to Tower. Intel's focus remains on advancing its system foundry plans and IDM 2.0 strategy, aiming to become a major external foundry. Intel Foundry Services (IFS) has shown significant progress, with over 300% YoY revenue increase in Q2 2023 and partnerships for advanced process technologies. The company aims to become the second-largest global external foundry by the end of the decade.
Statements on green and social investing seen as ‘fertile ground’ for enforcement division
How much does each country contribute to the $105 trillion world economy in 2023, and what nations are seeing their nominal GDPs shrink?
How to keep finding new ways to grow, year after year.
Wilmer's congressional practice has helped clients prepare for seven hearings in the first five months of the year, six of which included CEOs.
Without Balance Your Organization Won’t Persist
If you are a company leader hoping to undertake a successful organizational change, you need to make sure your team is on board and motivated to help make it happen. The following strategies can help you better understand your employees’ perspectives. Start by creating audience personas that map to key employee segments in your company. Then interview individual employees in each segment to get a sample perspective on typical mindsets, and tailor your communication to match their mood. It’s also important to be as transparent as possible. While you may need to keep some facts private during a transition, the general rule is that the more informed your people are, the more they’ll be able to deal with discomfort. So, learn about your team’s specific fears, and acknowledge them openly. And make sure individuals at all levels feel included. A transformation won’t succeed without broad involvement.
Delivering Legal Services
Our Top 10 picks to learn more about Delivering Legal Services
Insiders and other commentators say the star London private equity partner will take 'many millions' worth of business with him, as others attribute his success to the unrivalled platform Kirkland offered.
Precarious law firm partnerships have been disrupted by pandemic-era working conditions, industry consultants, recruiters and firm leaders have observed. New alliances have been formed across offices at the expense of the ties that used to bind lawyers with regular office attendance.
“I think many people would rather work on a new problem than a settled problem. Here, there is a lot more opportunity to work on unsettled legal and policy questions,” said Adam Kovacevich, CEO of Chamber of Progress.
A comprehensive overview of Integrated Law: the newest category in legal services which aims to solve for complex legal work at scale.
The summer of our discontents. Two months ago, if you prompted Version 3 of the AI-art generator MidJourney to generate depictions of an "otter on a plane
Winter is coming and many legal departments will be left in the cold. Let's get a difficult conceptual issue out of the way. This is a long post that some
Why law departments solve for the local optimum at the expense of the global optimum. Why pursue the path of least resistance.
Legal tech IPOs have gained momentum, including LegalZoom, Intapp, and CS Disco. The legal tech market, comprising legal tech, compliance (RegTech), and contracting (KTech), is estimated to be worth $14 billion, set to triple in size in 5 years. Increasing legal complexity, rising costs, data explosion, and regulatory changes are driving demand for legal expertise and technology. The legal tech boom is rooted in unmet needs, attracting significant venture capital investments.
40 years ago, several idealistic young lawyers walked away from safe and more established career paths to pursue the idea of providing affordable legal services to working- and middle-class people. This was the storefront revolution. Although the revolution failed, it contains powerful lessons for all lawyers.
I was recently hired by the State Bar of California to write a landscape report on the changing nature of the legal services market. This comes after the State Bar was reorganized to focus exclusively on its regulatory function. The report is now posted on the State Bar website.
Worth Reading
Just some books we ❤️
"A New Way to Think" by Roger Martin explores innovative approaches to common business challenges, emphasizing rethinking models and strategies for better outcomes.
The Third Edition of "Designing Organizations" offers a strategic guide to creating and managing effective organizations, using the Star Model framework and incorporating modern examples and concepts.
Everett M. Rogers explains how new ideas spread through communication channels over time, focusing on innovation adoption and the impact of the Internet on diffusion processes.
Watts challenges common sense and historical examples, revealing how human behavior prediction often fails due to complex dynamics.
Nassim Nicholas Taleb's "Fooled by Randomness" challenges our understanding of luck, skill, and perception in business and life.
"Four Thousand Weeks" explores life's brevity, time management, and meaningful living through philosophical insights, offering practical alternatives to common approaches.
Learn to create impactful visualizations with Good Charts, a guide that teaches the art of effective data communication, combining research and practical insights for better understanding and persuasion.
"How Big Things Get Done" by Bent Flyvbjerg explores the factors that lead projects to succeed or fail, offering principles like understanding odds, planning, teamwork, and mastering uncertainty.
"Influence" by Dr. Robert B. Cialdini explores six principles of persuasion: reciprocation, commitment, social proof, liking, authority, and scarcity. Learn how to ethically apply these principles for effective communication.
"Mistakes Were Made (But Not by Me)" delves into self-justification, exploring how the brain avoids responsibility through self-deception.
Stay Informed with LexFusion
Explore news and updates about LexFusion.
Special Post: LexFusion offers new way to design, bundle, and buy one-to-many legal solutions (203) | Legal Evolution
Getting naked with colleagues and clients (267) | Legal Evolution
Let’s Forge the Future Together
Interested in joining our roster of legal innovators or simply curious about the world of legal tech? Reach out! Our team is eager to connect, collaborate, and contribute to the ever-evolving legal landscape.