Stanford Researchers Urge Tech Companies to be More Transparent

Researchers from Stanford University have released a report urging companies such as OpenAI and Google to reveal more information about their training data and human labor. According to Reuters, the report's authors pointed to foundation models and the need for greater transparency.

Stanford professor Percy Liang, a researcher behind the Foundation Model Transparency Index, said, “It is clear over the last three years that transparency is on the decline while capability is going through the roof…We view this as highly problematic because we’ve seen in other areas like social media that when transparency goes down, bad things can happen as a result.”

The models in question, foundation models, are AI systems trained on massive datasets that can perform a diverse range of tasks, from code generation to text and more. As expected, the companies behind these models are the main drivers of the current rise of generative AI technology.

The report points out that as these models become more integrated into businesses and industries across the globe, being able to understand their biases and limitations has become critical. The index linked above graded ten popular models on one hundred different transparency indicators.

These included training data and computing power. All ten models scored unimpressively; even the top-ranked model, Meta’s LLaMA 2, earned only 53 out of 100. OpenAI’s GPT-4 wasn’t far behind with a score of 47, while Amazon’s Titan ranked lowest at 11.

The index’s authors hope that the release of the report will push tech companies to become more transparent about their foundation models. They also hope the report can act as a starting point for governments looking to learn how best to regulate AI.

The project was conducted by the Stanford Institute for Human-Centered Artificial Intelligence’s Center for Research on Foundation Models. It comes two months after meetings between lawmakers and tech executives in Washington, D.C., and after major tech companies such as Microsoft and Google openly began pushing for greater government understanding of AI.

Though lawmakers in Washington appear deadlocked, AI regulation remains a hot topic globally as nations and regions grapple with the technology in ways that don’t reduce its projected positive impacts.

ODSC Team

ODSC gathers the attendees, presenters, and companies that are shaping the present and future of data science and AI. ODSC hosts one of the largest gatherings of professional data scientists, with major conferences in the USA, Europe, and Asia.
