
The New Texas AI Bill (TRAIGA) Could be Smarter


By Jonathan K. Hustis, Member, Fulton Jeang, PLLC, Dallas, Texas USA

 

Texas is moving ahead with a bold proposal to regulate the development and use of artificial intelligence systems. This fall, Texas House Representative Giovanni Capriglione introduced a bill that would enact the Texas Responsible AI Governance Act (or “TRAIGA”). Representative Capriglione is a Republican from Keller, Texas, and he heads the House’s Select Committee on Artificial Intelligence & Emerging Technologies. Congratulations to Rep. Capriglione, the Committee, and Texas for getting this bill out for consideration. I don’t think there is anything easy about this for legislators or the government, but it is work that needs to be done.


Here are some first thoughts after reading the bill once and reading some early comments. This paper is longer and took more time to write than I expected. Why? Because this is dense stuff to figure out, and it is regulating conduct that most of us don’t yet understand or participate in. But make no mistake, we are starting to participate in greater and greater numbers, with more and more consequences. I used AI-assisted search engines and editors in writing this article. I only started using AI tools in my law practice four months ago, and now I use them regularly.


This article introduces the key concepts and definitions of TRAIGA and raises questions about many of them. The questions are primarily about areas where the bill seems too broadly worded for its intent, or where the intended application seems too broad as a policy matter to this writer, an ordinary citizen and legal practitioner, not a policy expert. Something like this bill needs to be passed, and TRAIGA is a good starting point for that work to be done and for a better bill to be enacted in 2025.


CONTEXT AND GENERAL OVERVIEW

TRAIGA in Context of Other States’ Regulation of AI.

If TRAIGA passes in 2025, it appears that Texas would be the second state to pass comprehensive statutory regulation of AI, and (forgive me, Colorado) the first highly populated state with major economic clout to do so.


According to Dean Ball, a research fellow at George Mason University’s Mercatus Center, TRAIGA is the fourth comprehensive proposal for state regulation of AI:

  1. Virginia House Bill 747, introduced by Representative Michelle Maldonado (did not pass)

  2. Colorado Senate Bill 205, introduced by Senator Robert Rodriguez (passed and signed by Governor Polis)

  3. Connecticut Senate Bill 2, introduced by Senator James Maroney (did not pass, and also was, incidentally, the least offensive of this family of bills)

  4. And now TRAIGA, introduced by Representative Capriglione


Ball’s article on TRAIGA, titled “Hold My Beer, California,” is a well-written, somewhat polemic anti-regulatory critique of TRAIGA with a perspective different from mine. It is a well-considered point of view, worth reading. Also, it is fun to read. Check it out.


Points of Optimism.

The bill is comprehensive in scope. It addresses across-the-board dangers of the commercial development and deployment of AI that are being pointed out by responsible policy commentators and experts in the AI technology industry. The speed and power of AI’s development and introduction into the marketplace seem to introduce an unprecedentedly broad risk of badly disruptive, harmful consequences to our society, economy, and the republic. The risks include disruption of our public information and voting processes through the rapid spread of distorted and falsified political information; unprecedented illegal invasion of privacy and public or corrupt use of private information by bad actors; catastrophic errors in systems that affect or are designed to protect public safety or natural resources; and rapid disruption of jobs and the economy without members of the public having tools to recover from the disruption.

Given my severe view of the risks, I’d like us to accept Ben Franklin’s challenge to help keep this republic by introducing regulatory responses commensurate to the risks, if we can. Because the risks are comprehensive, comprehensive state regulation of AI in the digital environment seems a necessary step. Even though mistakes in regulation will occur, they can be rectified more safely and quickly than the consequences of abdicating this responsibility. Public officials should become adept with AI risks and benefits and regulate AI responsibly, rather than let a fear of overregulation lead us to abdicate that responsibility.


The bill allocates responsibility to the central constituents and actors in the AI industry.  The central players in the bill are AI developers, AI distributors, and AI users (called deployers). Each term is defined by reference to persons doing business in Texas and engaging in the defined activities. A possible improvement would be to also regulate the government’s use of AI more specifically, but I will not explore that here.


In my view, the enumerated groups need to be responsible for the risks they create for other people by introducing AI into the marketplace. Consumers don’t have any more control over the risks of bad AI than a pedestrian has over bad drivers, and a consumer can’t opt out of being a consumer any more than people can opt out of walking on the sidewalk sometimes. AI is moving at high speed and has a big impact. To protect consumers, the drivers of AI’s introduction need some direction and control, as well as responsibility and accountability for the consequences of their commercial use of AI. That includes the users, the deployers. I use AI very carefully in my law practice. My clients need to know that I am educated and responsible in my use of it, and that I am diligent to avoid being misled by the AI hallucinations that occur regularly at this stage of the art.

The bill focuses its regulatory efforts on the concept of high-risk AI systems. The bill recognizes the state’s regulatory responsibility to focus on specific impacts that may come from the rapidly proliferating development and use of AI systems; rather than regulate all of them, it tries to focus on those that may do harm if left unregulated. At least I think the intended focus is on harm. It should be. Here’s a place, though, where TRAIGA’s language seems to need further work.


Caveat:  For example, the definition of risk in TRAIGA seems to be value neutral. It includes the degree of consequences as a component of risk without tying it to a negative value, such as harm. By degree of consequences, what is meant? If not harm, then is it some kind of measure of intensity of impact on the status quo, without regard to harm or benefit? Although some standards organizations and logicians like to equate risk with uncertainty, good or bad, most of us don’t usually talk about the risk of benefits occurring. Instead, we usually talk about the risk of bad stuff happening and, let’s say, the opportunity for benefits. The term high-impact AI systems would better fit a value-neutral definition, but the point is that the definition should not be value neutral. Who wants to overregulate high-impact AI systems if the impact is beneficial? Better to define risk with specific reference to potential harm. Harmful risk works for me, or risk of harm.


The regulation does impose burdens on businesses. Regulation itself imposes costs that will inevitably be passed on to consumers in the main players’ pursuit of profit. It does tend to stifle innovation with restrictions and bureaucracy. We should be careful that the regulations stay focused on reducing harmful risks, not on socially engineering mere uncertainties. This is a problem that needs more attention.


TRAIGA attempts to be specific in its outright prohibition of certain AI systems.  It categorizes some AI systems as being beyond high-risk; these are treated as simply unacceptable in their risk attributes. There are seven prohibitions. They look overbroad or too vague at this stage of the legislation. I can see where, after more careful consideration, better wording, and some future lessons from regulation under the sandbox program discussed further below, there may be some areas of AI-system use or deployment that should simply be prohibited. But the lines should be drawn differently from those in the current version of the bill.


TRAIGA was influenced by broad study and knowledgeable industry representatives. According to Mr. Ball, the four states’ legislative attempts cited above, including Texas’s TRAIGA bill, “emerged out of a multistakeholder process led by the Future of Privacy Forum, an organization whose members include a large swath of American industry, including Anthropic, Apple, Google, Meta, Microsoft, and OpenAI (though not Nvidia). Many blueblooded academics, lawyers, and others (including some friends of mine) sit on their Board of Advisors.”  In fact, Mr. Ball’s respect for them is sufficiently high that he seems mystified at how they could come up with results he so strongly criticizes. My approach is to take Mr. Ball seriously on both counts. Let’s dig in with goodwill to positively influence the continued drafting, review, and passage of TRAIGA in 2025.  Reach out to Representative Capriglione and others on the Committee with your thoughts and concerns.


Texas Needs Responsible and User-Friendly Regulation of the AI Industry. AI is being used to produce more powerful disinformation, which in turn drives more powerful distortion of culture, resource allocation, policy views, and voting behavior, and creates unnecessary, unexpected economic hardship for ordinary citizens. The proliferation of technology that may invade individual privacy, drown responsible public discourse, overwhelm permissible commercial speech with deception, and inculcate ignorance needs to be dealt with in a way that encourages rather than shuts down open discourse.


At the same time, the possible benefits of AI in the marketplace for companies, individuals, and governments seem well-documented, obvious, remarkable, and worth pursuing. Healthcare advancements, productivity and efficiency increases in business, climate change mitigation through optimized energy consumption and more refined development of sustainable agriculture practices, and improvements in transportation are all areas identified by my research (using AI tools). There seem to be well-researched and reasonably held opinions by experts behind these claims, based on the further source-checking I did behind the initial AI answers. I would encourage AI systems to be used in these and other areas. As a reasonably well-informed citizen, with a healthy helping of skepticism, I don’t think Texas or any other place involved in national and international commerce has the luxury of ignoring or stifling AI development.

 

A LOOK AT THE BILL

Why Regulate? What are the Objectives?

The Sandbox Program.  TRAIGA’s regulatory objectives are most explicit toward the end of the bill, for example in Chapter 552, where the regulatory framework called a sandbox program is introduced, and in Chapter 553, where an Artificial Intelligence Council (the council) is defined and established. From these provisions, one can infer that the TRAIGA regulation is intended to maintain existing data privacy and protection standards in the development and deployment of AI systems, protect consumers, protect individuals’ privacy, and protect public safety. In more detail, the mandate of the sandbox program roughly breaks down as follows:

(1)     promote safe innovation and use of artificial intelligence across various sectors including healthcare, finance, education, and public services;

 

(2)     strike a balance between promoting the deployment of AI systems and protecting consumers, privacy, and public safety;

 

(3)     permit the development and testing of AI systems free of certain regulatory requirements (I assume that this means innovation-stifling requirements), provided that the development and testing are done within the guidelines of the sandbox program;

 

(4)     regulate the development and testing within the sandbox program by requiring: a) monitoring and analysis of impacts on consumers, privacy and public safety, b) plans to mitigate adverse consequences that might occur within or escape the sandbox, c) proof of compliance with applicable federal laws governing AI;

 

(5)     require quarterly reports during the sandbox testing with detailed performance metrics, updates on risk mitigation, and feedback from consumers and stakeholders.

 

The Council’s Authority.  The council is given various powers of oversight and compliance that it exercises as part of the existing Texas Department of Information Resources (the department).  Members of the council must be qualified in one or more of the following: AI technologies; data privacy and security; ethics in technology and law; public policy or regulation; or risk management or safety related to AI systems. Again, one can infer that these qualifications indicate the areas of risk management for which the council will be responsible. The council may exercise its authority in these areas through advisory opinions, rule-making, and standards development within the areas of its duties. The duties include aligning its pronouncements with state laws on artificial intelligence, technology, data security, and consumer protection, and conducting training programs for state agencies and local governments on the ethical use of artificial intelligence systems.


Impact on Personal Data Privacy and Security Regulations. TRAIGA also amends the Texas Business & Commerce Code sections involving consumer data rights, by adding a consumer right to know whether one’s personal data will be used in an AI system and for what purposes, and a right to opt out of the sale or sharing of personal data for use in AI systems. These amendments also obligate controllers of personal data under existing privacy statutes to implement a full array of reasonable practices relative to personal data in the context of processing by an AI system, and to give a privacy notice acknowledging any processing of personal data for the AI system.  Similar AI-related augmentation is added to the Business & Commerce Code provisions governing data processors. These clarify a data processor’s obligations specifically, to help the controller comply with the controller’s obligations for any AI system-related processing of personal data.
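To make the shape of these new obligations concrete, here is a minimal, hypothetical sketch (in Python) of the kind of record a controller might keep to track them. The field and function names are my own illustrative assumptions, not terms from the bill; the point is only that the amendments add AI-specific disclosure, opt-out, and notice items to what controllers already track under existing privacy statutes.

# Hypothetical sketch only: field names are illustrative, not statutory terms.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AIProcessingDisclosure:
    # Right to know: will personal data be used in an AI system, and for what purposes?
    data_used_in_ai_system: bool
    purposes: List[str] = field(default_factory=list)
    # Right to opt out of the sale or sharing of personal data for AI-system use.
    opted_out_of_sale_or_sharing: bool = False
    # Controller's privacy-notice obligation for AI-system processing.
    privacy_notice_given: bool = False

def may_share_for_ai_use(d: AIProcessingDisclosure) -> bool:
    """Sketch: sharing for AI-system use presumes notice was given and the
    consumer has not opted out (my reading, not the bill's text)."""
    return d.privacy_notice_given and not d.opted_out_of_sale_or_sharing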

 

What’s an Artificial Intelligence System? A Normal Definition.

TRAIGA defines an artificial intelligence system as a machine-based system capable of:

 

(A) perceiving an environment through data acquisition and processing and interpreting the derived information to take an action or actions or to imitate intelligent behavior given a specific goal; and

 

(B) learning and adapting behavior by analyzing how the environment is affected by prior actions.

 

In looking around a little, with the help of AI tools, I determined that this is a fairly “standard,” commonly used definition that is usable for the purpose.
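As a check on my own reading, here is a minimal sketch (in Python, with attribute names of my own invention, not the bill’s) of the definition’s two-prong structure: prong (A) about perceiving an environment and interpreting it to act toward a goal, and prong (B) about learning from how the environment was affected by prior actions. Both prongs must be satisfied.

# Sketch of the two-prong definition; attribute names are mine, not the bill's.
from dataclasses import dataclass

@dataclass
class SystemProfile:
    perceives_environment: bool          # (A) acquires and processes data about an environment
    interprets_to_act_or_imitate: bool   # (A) interprets that data to act or imitate intelligent behavior toward a goal
    learns_from_prior_actions: bool      # (B) adapts behavior by analyzing how the environment was affected by prior actions

def is_ai_system(p: SystemProfile) -> bool:
    """Both prong (A) and prong (B) must hold under the quoted definition."""
    prong_a = p.perceives_environment and p.interprets_to_act_or_imitate
    prong_b = p.learns_from_prior_actions
    return prong_a and prong_b

# Example: a plain calculator neither perceives an environment nor learns from
# prior actions, so it fails both prongs.
assert not is_ai_system(SystemProfile(False, False, False))

This matters below, because the high-risk definition carves out technologies such as calculators that may never satisfy these prongs in the first place.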


What’s A High-Risk AI System? A Problem Here.

Here, the basic definition is simple and broad, but not actionably clear by itself: "High-risk artificial intelligence system" means any artificial intelligence system that, when deployed, makes, or is a contributing factor in making, a consequential decision.

 

The first clarifying filter that TRAIGA applies is to distinguish high-risk artificial intelligence systems from prohibited systems. TRAIGA by definition excludes from high-risk AI systems any systems that use or deploy the practices that Subchapter B of the statute outright prohibits.  I express concerns about the prohibitions further below. So, TRAIGA governs the use of high-risk AI systems; it simply prohibits, and does not otherwise regulate, those that employ the prohibited AI practices.

 

The definition of high-risk artificial intelligence system then excludes a long list of technologies. By definition, then, these excluded technologies are not high-risk AI systems. In fact, when I read them, they might or might not even be AI systems at all.  However, in a sort of reverse fake hand-off (read: “hard to follow”), the definition then adds a blanket, conditional exception to all of the excluded technologies, making some of them high-risk systems after all. That is, the technologies are not high-risk AI systems unless the technologies, when deployed, make, or are a contributing factor in making, a consequential decision. That’s confusing.

 

Here's my confusion. Let’s assume that in a given instance a technology is not an artificial intelligence system as defined in TRAIGA.  Then assume an instance where, when the technology is deployed, the user uses it or its output to make, or as a contributing factor in making, a consequential decision.  Does that technology thereby become an artificial intelligence system under TRAIGA?  How does that work?  For example, data storage and calculators are each enumerated technologies excluded by TRAIGA from being high-risk artificial intelligence systems. Neither appears, by itself, to be an artificial intelligence system under TRAIGA, or even to need to operate as part of one. A data storage system or a calculator can certainly be deployed by a user in the process of making a consequential decision, but the technology does not, for example, “learn[ and] adapt[] behavior by analyzing how the environment is affected by prior actions.” The user does that. So the technology never meets the definition of artificial intelligence system.  How does it suddenly become a high-risk artificial intelligence system to be regulated under TRAIGA?   TRAIGA should not be about making dangerous decisions with any technology other than AI, should it?

 

Here's the text. It’s a long list, but worth perusing. See if it’s clearer for you than it was for me how this is going to work. (After the quoted text, I sketch how the branching seems to read.)

 

(13) "High-risk artificial intelligence system" means any artificial intelligence system that, when deployed, makes, or is a contributing factor in making, a consequential decision. The term does not include:

(A) an artificial intelligence system if the artificial intelligence system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not intended to replace or influence a previously completed human assessment without sufficient human review;

(B) an artificial intelligence system that violates a provision of Subchapter B;

(C) the following technologies, unless the technologies, when deployed, make, or are a contributing factor in making, a consequential decision:

(i) anti-malware;

(ii) anti-virus;

(iii) calculators;

(iv) cybersecurity;

(v) databases;

(vi) data storage;

(vii) firewall;

(viii) internet domain registration;

(ix) internet website loading;

(x) networking;

(xi) spam- and robocall-filtering;

(xii) spell-checking;

(xiii) spreadsheets;

(xiv) web caching;

(xv) web hosting or any similar technology; or

(xvi) any technology that solely communicates in natural language for the sole purpose of providing users with information, making referrals or recommendations, and answering questions and is subject to an accepted use policy that prohibits generating content that is discriminatory or harmful, as long as the system does not violate any provision listed in Subchapter B.
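To test my own reading, here is a minimal sketch, in Python, of how the branching in this definition seems to work. The function and parameter names are mine, not the bill’s, and the sketch deliberately reproduces the ambiguity discussed above: the “unless” clause in subsection (C) can pull a carved-out technology back into the high-risk category without ever asking whether it meets the base definition of an artificial intelligence system.

# Sketch only: my reading of the definitional branches; names are hypothetical.
from typing import Optional

EXCLUDED_TECHNOLOGIES = {
    "anti-malware", "anti-virus", "calculators", "cybersecurity", "databases",
    "data storage", "firewall", "internet domain registration",
    "internet website loading", "networking", "spam- and robocall-filtering",
    "spell-checking", "spreadsheets", "web caching", "web hosting",
    # (item (xvi), the natural-language information-and-referral carve-out, is omitted for brevity)
}

def is_high_risk(
    is_ai_system: bool,
    violates_subchapter_b: bool,
    pattern_detection_with_human_review: bool,
    technology_category: Optional[str],
    contributes_to_consequential_decision: bool,
) -> bool:
    # (B): a system that violates Subchapter B is prohibited, not "high-risk."
    if violates_subchapter_b:
        return False
    # (A): pattern-detection tools that do not replace or influence a completed
    # human assessment without sufficient human review are excluded.
    if pattern_detection_with_human_review:
        return False
    # (C): the listed technologies are excluded...
    if technology_category in EXCLUDED_TECHNOLOGIES:
        # ...unless they contribute to a consequential decision. Note that this
        # branch never checks is_ai_system, which is the circularity questioned
        # above: a calculator used in a consequential decision seems to land
        # here without ever meeting the base AI-system definition.
        return contributes_to_consequential_decision
    # Base rule: an AI system that makes or contributes to a consequential decision.
    return is_ai_system and contributes_to_consequential_decision

Read this way, the carve-out’s focus seems to be the decision rather than the technology, which is exactly the policy question raised above.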

 

The Seven Prohibitions.  The area of prohibitions is another spot where I lost some of my initial good feelings about TRAIGA. These prohibitions look overbroad and unnecessary. The prohibitions as written are an intrusion on technological innovation, and they do not appear to serve the public good. My bias is that the general public would be better served by not prohibiting development and deployment outright, but rather by forbidding the causing of legal injury or harm in the development or deployment.  Here are the details.


1)        Manipulation of Human Behavior to Circumvent Informed Decision-Making.  That heading looks good in theory, but when you get to the specifics in the description, I am troubled. It forbids using “subliminal techniques beyond a person’s consciousness, or purposefully manipulative or deceptive techniques, with the objective or the effect of materially distorting the behavior of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing a person to make a decision that the person would not have otherwise made, in a manner that causes or is likely to cause significant harm to that person or another person or group of persons.”  Whew. Note first that this definition does include a notion of harm, and thus is intended to regulate a harmful risk. That addresses, in this case, the value-neutral formulation of high risk complained about above. Note, second, another favorable comment: part of the prohibition is about deceptive techniques. I like that. It seems that with that phrase we are forbidding fraudulent deception. Attorneys general already have responsibility for prohibiting and preventing fraudulent deception in a number of areas. Making it prohibited in the use of AI systems might or might not be redundant, but it doesn’t trouble me.

 

Still, the overall wording of this prohibition against subliminal techniques does trouble me, because it is not confined to deceptive or fraudulent behavior.  What does it mean to impair a person’s ability to make an informed decision, and how could subliminal or purposefully manipulative techniques do this? We are constantly confronted with subliminal messages and manipulative techniques that challenge our ability to pause and make an informed decision. Isn’t the advertising industry at least partially and legitimately based on its ability to place these messages in our consciousness or subconscious?  At what point does this challenge to our awareness become an impairment and not just a challenge? And in which people? Is there a responsibility on the individual citizen’s part not to be impaired by manipulative and even subliminal behavior, to minimize our moments of weakness? Is this really a governable thing, and if not, what will these powers of governance be applied to? Should AI innovation be stopped in these areas entirely with a prohibition? Or is this a high-risk situation that perhaps needs some oversight and monitoring to recognize and avoid harmful extremes?

 

Even if it is aimed for the most part at bad, socially harmful, deceptive behavior, it seems that under TRAIGA an elected attorney general who is a member of one party or the other may regulate AI-generated political speech that would otherwise be permissible, just because it is AI-generated.

 

Another issue: in this case, what are the harms we are talking about?

 

My general citizen’s understanding is that manipulation, subliminal appeal, and changing people’s behavior to affect their decisions, although often odious and not what you teach your children to do, are part of our normal political process.  Short of outright lies and bribes, they are permitted in politics. Furthermore, a lot of lying and bribery happens anyway, permitted or not. As citizens, our remedy is to inform ourselves using alternative sources, and to inoculate ourselves by learning critical discourse, reading, and thinking. This is not speech that the attorney general ought to manage as such, unless it is demonstrably fraudulent.

 

In short, I’m not sure that AI imposes a different risk in kind on our ability to make an informed decision than the normal political process as illustrated in the media today. I think the risk lies in the degree of amplification and sophistication of these distortions and manipulations. So, why not just classify it as high-risk? This concern may be a step into policy, psychology, and technology concerns beyond my professional credentials, but it is my concern as an informed citizen.

 

2)        Social Scoring.  This prohibits the development or use of an artificial intelligence system for the evaluation or classification of natural persons or groups of natural persons based on their social behavior or known, inferred, or predicted personal characteristics, with the intent to determine a social score or similar categorical estimation or valuation of a person or group of persons. I do see how this kind of system could be powerful information for someone who wants to discriminate based on already prohibited, illegal categories such as race, religion, national origin, gender, etc.  However, the language here seems overbroad.  Aren’t there potentially benign and nonharmful uses that are equally being forbidden here? Why? The history of Europe and the Nazification of Germany in the first half of the 1900s may be part of the justification in the EU for this type of language, but can’t we instead rely on adding some definition of harmful uses here? Are we really asking the attorney general to police new, potentially harmless or beneficial, categories of discrimination that have not been legislated? Maybe high-risk, but why prohibited?

 

3)        Capture of Biometric Identifiers Using Artificial Intelligence.  Is this generally prohibited for non-AI systems? I’m not aware that it is. There are various restrictions related to the use of biometric information, depending on how it is acquired, but I’m not aware of blanket restrictions on its capture from public sources. In other words, one can, in general, still gather this type of information, including from the internet and other public sources, if some other invasion of privacy or violation of trust or confidentiality is not occurring.  Why, then, do we start with AI systems? I don’t understand why, so maybe there is more reading to do. The intent seems clear enough from the language, though. That these systems might be considered high-risk I would understand, but not the prohibition.

 

4)        Categorization Based on Sensitive Attributes

 

The prohibition is:

An artificial intelligence system shall not be developed or deployed that infers or interprets, or is capable of inferring or interpreting, sensitive personal attributes of a person or group of persons using biometric identifiers, except for the labeling or filtering of lawfully acquired biometric identifier data.

 

Sensitive personal attributes are defined as: race, political opinions, religious or philosophical beliefs, or sex.

 

My question: what’s the harm, unless it is used to unlawfully discriminate or invade privacy, or commit some other legal harm? In those cases where the biometric and other data used by the artificial intelligence system are legally collected from public sources or by informed consent, is the algorithm’s inference of sensitive personal attributes from them a violation of privacy as that term is commonly understood in the US under current privacy laws? And are there not valid, nonharmful, nondiscriminatory, commercial and social reasons for targeting communications to certain interest groups or collections of people with common sensitive personal attributes, as they may be inferred by the system? Could we use documented consent at the time of gathering nonpublic biometric data, for use in AI-assisted market or social research and communications, assuring that this use would not overreach the stated purpose of the voluntary collection of data and therefore would not violate privacy principles? These are not rhetorical questions, but rather things that I would like to better understand as a practitioner. I hope that the intent is not to prohibit the development of AI for social or market research that is not per se harmful.

 

5)        Utilization of Personal Attributes for Harm

 

The statute:

An artificial intelligence system shall not utilize characteristics of a person or a specific group of persons based on their race, color, disability, religion, sex, national origin, age, or a specific social or economic situation, with the objective, or the effect, of materially distorting the behavior of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm.

 

This needs work. The fact situation that comes to mind is an AI system that has become so good at targeting population sectors and influencing behavior that, when it is used to advertise an expensive car, people who cannot afford the car are influenced so strongly that they take out car loans, sell something that’s not theirs, or steal, to buy it.  Was that system so good that it was reasonably likely to cause that person or another person significant harm? Was it so powerful that people could not exercise their own judgment? Does the individual then have a cause of complaint to the attorney general to force the return of the car, in the absence of some kind of fraud or misrepresentation, because the person could not control their behavior?  Without more clarifying fact situations, I’m not sure my clients who advertise or sell cars are going to know what to do with this. I’d like to hear more to understand how this is feasible.

 

6)        Emotion Recognition.

 

The statute:

Regardless of the intended use or purpose, an artificial intelligence system shall not be developed or deployed that infers, or is capable of inferring, the emotions of a natural person without the express consent of the natural person.


Why not? Amateur and professional photographers have been taking pictures of people in parks, restaurants, streets, wherever, for over a century, recording the emotional expressions, both subtle and unsubtle, of their subjects. The same goes for those who record public events, family parties, and the like, and for those who report news of public events in live streams or broadcasts.  How does the use of AI to fine-tune our understanding of others’ emotions violate their right to privacy, if those emotions have been expressed in front of us?


7)        Certain Sexually Explicit Videos, Images, and Child Pornography.

The statute:

An artificial intelligence system shall not be developed or deployed that produces, assists, or aids in producing, or is capable of producing unlawful visual material in violation of Section 43.26, Penal Code or an unlawful deep fake video or image in violation of Section 21.165, Penal Code.

 

This one makes sense. For things that are already violations of the Penal Code, I see no reason not to prohibit the development or use of AI systems for doing them. I did not look up the Penal Code references, so there may be further questions there, but so far it makes sense.

 

The Regulatory Sandbox.

In some cases above, the questions are around whether the identified harm is something needing special regulation in the case of AI.  In other words, these are legal or societal harms already addressed in the broader context of fraud, deceptive commercial practices, theft, libel, etc.  The answer to many of the questions raised above is a matter of understanding the level of risk involved in an AI system, i.e., whether such an identified harm is intrinsic to or amplified by the use of an AI system to some unacceptable extent.

To address the identification and management of risk, TRAIGA uses a sandbox program, which is a regulatory framework established under Chapter 552 of TRAIGA to allow temporary testing of artificial intelligence systems in a controlled, limited manner without full regulatory compliance.


The Texas Department of Information Resources (the department) is appointed to administer and oversee the sandbox program, and is to coordinate with the Artificial Intelligence Council (the council) in doing so.


More about the objectives of the sandbox program is in the “Why Regulate? What are the Objectives?” section of this paper above.  There is an application and reporting regime, and some other details of administration are provided in the bill.


AI Workforce Development Grant Program.  The final note I’ll make here is that TRAIGA directs the Texas Workforce Commission to establish and administer a Workforce Development Grant Program to develop a skilled AI workforce in Texas, working across AI companies, local community colleges, and high schools. There is more detail on that in the bill, but it’s outside the areas I wanted to address here. The point I’ll offer is that it creates a means for funding companies and schools in the development of a workforce that will be needed to support the development and deployment of AI systems in Texas by leading-edge companies, and to help portions of the existing workforce transition into a more AI-based economy.

CONCLUSION

The Texas legislature is seeking to put Texas ahead of California, Illinois, New York, Massachusetts and other states in the regulation and the fostering of an AI-literate and competent economy. This is a good thing for Texans. Texas legal practitioners and business practitioners had better get educated on what is being talked about. An elegantly written and executed AI governance framework could be a very good thing for consumers, business, the public and the economy of Texas and every other state. No regulation or poor regulation creates a risk of very high-impact, unanticipated, and unintended consequences. Please pay attention and please keep in touch with your legislators on this topic. Get good advice before advising or investing in an AI enterprise. Find out whether its principals understand the compliance requirements that are surely coming in some form, if not TRAIGA. This author thinks TRAIGA is as good a start as any, now that it’s introduced.  Let’s work on it.

 

Jonathan K. Hustis, the author, is a member of Fulton Jeang PLLC, whose legal practice includes technology company corporate governance, M&A, compliance, financing, contracts, and privacy law. He is admitted to practice in Texas and in federal courts in the Northern District of Texas. Jon is also a Certified Information Privacy Professional/US.

Acknowledgements:  Fulton Jeang, PLLC associate attorney Wei Wu provided review, edits and comments to the author. Wei is an attorney qualified to practice before the US Patent and Trademark Office and in the States of Texas and Minnesota. Wei is also a Certified Information Privacy Professional/US.
