In the U.S., legislation around generative AI (GAI) remains nascent. The first concrete steps toward lawmaking were taken back in April, when the Department of Commerce asked for public comment about policy around GAI. The recommendations and data collected during that public exercise will help the Commerce Department shape legislation in a way that increases transparency, prevents bias, and improves accountability in AI systems. Ultimately, the goal of AI policymaking in the U.S. is not about restriction; it's about ensuring that AI tools work in the ways they promise to. More initiatives have come out recently in the U.S. as GAI is adopted for business use. We have also seen our first case law opinions about GAI in the area of copyright infringement.
The European Union, on the other hand, is scoping regulation more broadly and is considering a far-reaching legal framework to govern AI (see: the EU AI Act), classifying systems by risk and mandating development and user requirements accordingly.
The State of GAI Regulation
It's important to note that existing GAI technology, from large language models (LLMs) to neural networks, is essentially a set of research projects, including the popular ChatGPT. To test and improve that research, LLM organizations partner with or are sponsored by companies that are looking to monetize the technology. The research organizations that develop these products are still learning how their products work; they offer their models to third parties in specific industries so they can fund their projects, gather data about how people interact with the models, and develop use cases as the technology evolves.
By nature, GAI is constantly changing, which means that structuring laws to control GAI will continue to be challenging for years to come. Because ChatGPT is still in the research phase (and because it's constantly evolving), lawmakers have to be careful to construct frameworks that guide future legislation; any specific law that's put into place right now will likely be outdated by the time new AI models are out of development and fully monetized.
For example, any kind of AI, learning model, or neural network, in its current form, is going to reflect bias, since these models mirror a world rife with bias and produce biased outputs from a biased, limited set of data. Researchers are currently exploring how bias enters and propagates through these models, but until we understand how it bleeds through, GAI can't be regulated effectively for bias.
Similarly, GAI poses risks to businesses in the areas of liability, privacy, and cybersecurity. In certain cases, AI systems can expose private data, trade secrets, and other intellectual property, threatening business security and opening the door to legal issues such as the use of text protected by copyright laws. Research and regulation around these AI problems also remain in the early stages.
"Structuring laws
to control GAI
will continue to
be challenging for
years to come."