It’s the laws that allow this that are the true crime.

submitted by /u/KARMA__FARMER__ to r/FluentInFinance

AI Lab PleIAs Releases Fully Open Dataset, as AMD, Ai2 Release Open AI Models

French private AI lab PleIAs "is committed to training LLMs in the open," they write in a blog post at Mozilla.org. "This means not only releasing our models but also being open about every aspect, from the training data to the training code. We define 'open' strictly: all data must be both accessible and under permissive licenses." On Wednesday, PleIAs announced the release of the largest open multilingual pretraining dataset, according to their blog post at HuggingFace:

Many have claimed that training large language models requires copyrighted data, making truly open AI development impossible. Today, Pleias is proving otherwise with the release of Common Corpus (part of the AI Alliance Open Trusted Data Initiative) — the largest fully open multilingual dataset for training LLMs, containing over 2 trillion tokens of permissively licensed content with provenance information (2,003,039,184,047 tokens).

As developers respond to pressures from new regulations like the EU AI Act, Common Corpus goes beyond compliance by making our entire permissively licensed dataset freely available on HuggingFace, with detailed documentation of every data source. We have taken extensive steps to ensure that the dataset is high-quality and is curated to train powerful models. Through this release, we are demonstrating that there doesn't have to be such a [heavy] trade-off between openness and performance. Common Corpus is:

— Truly Open: contains only data that is permissively licensed, with documented provenance
— Multilingual: mostly English and French data, but containing at least 1B tokens for each of over 30 languages
— Diverse: consisting of scientific articles, government and legal documents, code, and cultural heritage data, including books and newspapers
— Extensively Curated: spelling and formatting have been corrected in digitized texts, harmful and toxic content has been removed, and content of low educational value has also been removed

Common Corpus builds on a growing ecosystem of large, open datasets such as Dolma, FineWeb, and RefinedWeb. The Common Pile, currently in preparation under the coordination of EleutherAI, is built around the same principle of using permissively licensed English-language content, and, unsurprisingly, there were many opportunities for collaboration and shared effort. But even together, these datasets do not provide enough training data for models much larger than a few billion parameters. So in order to expand the options for open model training, we still need more open data...

Based on an analysis of 1 million user interactions with ChatGPT, the plurality of user requests are for creative compositions... The kind of content we actually need — like creative writing — is usually tied up in copyright restrictions. Common Corpus tackles these challenges through five carefully curated collections...

Last week AMD also released its first series of fully open 1-billion-parameter language models, AMD OLMo. And last month VentureBeat reported that the non-profit Allen Institute for AI had unveiled Molmo, "an open-source family of state-of-the-art multimodal AI models which outperform top proprietary rivals including OpenAI's GPT-4o, Anthropic's Claude 3.5 Sonnet, and Google's Gemini 1.5 on several third-party benchmarks."
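Since the corpus is distributed through HuggingFace, it can be sampled without downloading all two trillion tokens by streaming it with the datasets library. A minimal sketch, assuming the dataset is published under the ID "PleIAs/common_corpus" (check the PleIAs organization page on HuggingFace for the exact name and record fields):

    # Minimal sketch: stream a few Common Corpus records without a full download.
    # Assumptions: the dataset ID "PleIAs/common_corpus" and a "train" split.
    from itertools import islice

    from datasets import load_dataset

    # streaming=True iterates over records lazily instead of downloading the corpus.
    corpus = load_dataset("PleIAs/common_corpus", split="train", streaming=True)

    for record in islice(corpus, 3):
        # Inspect the real field names (text, license, source, etc.) on live records.
        print(record)

Streaming keeps the experiment cheap, which matters for a dataset whose full size is measured in trillions of tokens.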

Read more of this story at Slashdot.


Not a Mass resident, but really liked this comparison

submitted by /u/iv2892 to r/massachusetts

DHS unveils ‘practical’ AI responsibilities for critical infrastructure

The Department of Homeland Security is pushing members of the critical infrastructure community to adopt practices aimed at ensuring the safe and secure use of artificial intelligence.

DHS today unveiled the new guidance, “Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure.” It proposes a series of voluntary responsibilities for the use of AI in the 16 U.S. critical infrastructure sectors.

The responsibilities are divided across five groups: cloud and compute infrastructure providers; AI developers; critical infrastructure owners and operators; civil society; and the public sector.

The detailed guidelines touch on a wide range of issues, including cloud environments, AI model and system design, data governance, deployment considerations, and the monitoring of AI use across critical infrastructure.

The framework was developed in conjunction with DHS’ AI Safety and Security Board. The board includes representatives from top AI companies, as well as executives of major computing and semiconductor firms, government representatives and civil society members.

DHS Secretary Alejandro Mayorkas said industry was “intensely engaged” in the development of the framework.

“Industry was very, very helpful in ensuring that the guidelines are practical,” Mayorkas told reporters today. “This is not a document that advances theories. This is a document that provides practical guidance that can and should be implemented to advance safety and security.”

The framework comes after the Cybersecurity and Infrastructure Security Agency identified three major categories of AI-related risks to critical infrastructure: attacks using AI, attacks targeting the use of AI, and design and implementation failures.

“For owners and operators of critical infrastructure whose essential services and functions the public depends on daily, understanding the nature of these vulnerabilities and addressing them accordingly is not merely an operational requirement but a national imperative,” the document states.

Public sector AI guidance

For government agencies in the “public sector” group, the framework describes a responsibility to “ensure that relevant private sector entities across all sectors are appropriately protecting the rights of individuals and communities,” as well as a responsibility to “respond and support the American public in times of crisis or emergency.”

For instance, the framework suggests agencies have the opportunity to use law and regulation to advance AI standards. “Laws and regulations should protect individuals’ fundamental rights, help drive innovation, advance the harmonization of different legal requirements, simplify compliance, and clarify incident reporting processes,” the document states.

At the same time, the framework also recommends the public sector “responsibly leverage AI to improve the functioning of critical infrastructure.”

“It should prioritize the development of, and funding for, programs that advance responsible AI practices in government services,” the document continues. “Public sector entities should engage with civil society and each other regarding the public sector’s use of AI and avoid using AI in a manner that produces discriminatory outcomes, infringes upon personal privacy, or violates other legal rights. Public sector entities should not fund discriminatory technologies.”

Outlook under Trump

However, the future use of the AI guidance within the federal government is uncertain. It was developed as part of President Joe Biden’s sweeping AI executive order, and President-elect Donald Trump has said he would repeal the EO.

“I of course cannot speak to the incoming administration’s approach to the board that we have assembled,” Mayorkas said. “I certainly hope it persists.”

But he said the framework “will endure,” adding that all 23 members of the AI Safety and Security Board support the practices.

“We expect the board members to implement the guidelines, to catalyze other organizations in their respective spheres and across the ecosystem to adopt and implement the guidelines as well, and to have this take hold and to become the framework that will assist in driving harmonization, which is so key to our leadership,” Mayorkas said.

DHS AI use cases

DHS has also been touting its own use of AI in several pilot projects.

“Our pilot projects have demonstrated tremendous AI capabilities to advance our mission, and so we are taking those pilots and actually integrating the AI successes into our operations,” Mayorkas said.

In late October, DHS announced the completion of three generative AI pilots.

As part of one project, U.S. Citizenship and Immigration Services used a GenAI tool to help train immigration officers on interacting with refugees and asylum seekers. DHS said the pilot was used only for training, not to make immigration eligibility determinations.

Based on the results of the USCIS pilot, DHS is examining how generative AI could be used for other training “as a supplemental tool to better prepare the next generation of DHS officers.”

Another DHS pilot project involved Homeland Security Investigations using large language models to produce summaries of law enforcement reports.

“The pilot showed that these were valuable tools to enhance investigative processes,” DHS announced in its press release. “The HSI pilot, which was developed using an open-source AI model, found that open-source models provided the flexibility necessary to experiment and measure effectiveness. HSI professionals continue to test and optimize the use of open-source models in supporting law enforcement investigations.”
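DHS has not disclosed which open-source model HSI used or how its summarization pipeline was built. Purely as a hedged sketch of the general technique, here is what report summarization with an open model might look like using HuggingFace's transformers library (the model choice and the sample report text are illustrative assumptions, not DHS's actual setup):

    # Illustrative sketch only: DHS has not published HSI's model or pipeline.
    # "facebook/bart-large-cnn" is an arbitrary open-source summarization model,
    # and the report text below is invented for the example.
    from transformers import pipeline

    summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

    report = (
        "On the above date, investigators executed a search warrant at the "
        "subject premises, recovered three hard drives, and interviewed two "
        "witnesses who corroborated the timeline described in the complaint."
    )

    # do_sample=False keeps the output deterministic; length bounds are in tokens.
    result = summarizer(report, max_length=60, min_length=10, do_sample=False)
    print(result[0]["summary_text"])

The article credits open-source models with “the flexibility necessary to experiment and measure effectiveness”; swapping the model ID above for another open-weights model is exactly that kind of experiment.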

Meanwhile, the third completed pilot involved the Federal Emergency Management Agency using an LLM to help state and local communities draft community resilience plans.

“FEMA learned that increasing user understanding of AI and receiving feedback directly from community users is an important first step to integrating GenAI into any existing process,” DHS said. “FEMA is using lessons learned from the pilot to help determine how the technology can best support their mission.”

The post DHS unveils ‘practical’ AI responsibilities for critical infrastructure first appeared on Federal News Network.

Dems try to actually be useful challenge

submitted by /u/BaldHourGlass667 to r/BlackPeopleTwitter

How wonderfully refreshing

submitted by /u/Tobias-Tawanda to r/WhitePeopleTwitter