Daniel B. Rodriguez is the Harold Washington Professor at Northwestern University Pritzker School of Law and served as dean of the Law School from January 2012 through August 2018.
Professor Rodriguez has taught full-time at several law schools, including the University of Texas at Austin, the University of San Diego (where he also served as dean), and the University of California, Berkeley. He has also been a visiting professor at Harvard, Stanford, Columbia, USC, and Virginia.
His scholarship and teaching span a wide range of topics in public law, including administrative law, local government law, constitutional law, and property. He is also deeply interested in the law-business-technology interface.
A graduate of California State University Long Beach and Harvard Law School, Professor Rodriguez has served as chair of the ABA Center for Innovation, as a member of the ABA Commission on the Future of Legal Services, as president of the Association of American Law Schools, and as chair of the AALS Deans’ Steering Committee. He is presently a council member of the American Law Institute and a member of various task forces working on access to justice issues.
How do you envision the role of technology evolving in legal education, and what steps should law schools take to prepare future lawyers for this changing landscape?
I think there is a “present” and a “future” element here that I would separate to make the broader point that we will need to do a better job as legal educators and as law school leaders to ensure that our students have all the tools, and are exposed to the many perspectives, essential to understanding and using technology to practice law at the highest level and to further justice.
On the “present,” we are witnessing a tremendously active and energetic burst of tech-related education in law schools, the likes of which I believe is unique in the 30+ years I have been in law teaching. For example, there are courses on technology-enabled research – most of which move beyond the use and utility of traditional tools such as Lexis and Westlaw to consider how Google has changed research, and how large databases have given lawyers the information to analyze cases, regulations, and statutes and, significantly, have augmented our ability to predict legal outcomes. To take just one example from many, the ODR (online dispute resolution) movement has created the need to understand how these technologies work to aid, and even undertake, decision-making as an alternative to traditional adjudicatory models. There are many other examples.
On the cutting edge of this is generative AI, including the fast-moving emergence of ChatGPT and other technologies that use LLMs (large language models) not only to aid research and analysis, but to contribute information directly and even to supply modalities of advocacy. Many schools – and within a couple of years, I would predict, most law schools – provide education in generative AI. Some do this through stand-alone courses on AI and machine learning and their applicability to law and legal practice. Other law schools are embedding this education in traditional law school courses. Ultimately, law schools are wisely figuring out ways of ensuring that students get contemporary, relevant, and even essential instruction in what they will need to be successful lawyers and leaders in this third decade of the 21st century. And so, in summary, I am enthusiastic and rather bullish about what law schools are doing to improve tech-related legal education. At the same time, we need to continue to push the envelope, to activate and incentivize legal educators and other decisionmakers to ensure that these strategies become, if they are not already, a priority in our curriculum. This is not extra; it is essential.
How do you see recent technological advancements influencing the study and practice of public law?
As to the study, we can see how technology can enrich our understanding of how political institutions shape and implement public policy. A few examples: In studying the practice of statutory interpretation, legal technology, including LLMs and other methods, can assist us in searching for how particular phrases and terms have been enacted into law and interpreted by courts and agencies. Lexis and Westlaw, for all their virtues, were less useful for this purpose, but new methods have greatly enhanced this kind of work. Another example: Now that the Supreme Court and many lower courts have put a stake in the ground in favor of so-called originalist methods of interpretation, the role and salience of historical research have moved to the fore. Technology is being used to assist in dense historical research, not only in searching for comments by legislators and other political officials, but also in filling out the context in which critical debates happened and decisions were made. This doesn’t replace the human touch in doing careful historical research, but it is undoubtedly an aid for historians, legal scholars, and teachers.
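To make the phrase-level research idea concrete, here is a minimal sketch in Python of searching a corpus of court opinions for how a particular term has been used. The corpus directory, file layout, and example phrase are assumptions chosen purely for illustration; real research tools layer far more sophisticated retrieval and language models on top of this basic pattern.

```python
# A minimal keyword-in-context (KWIC) search over a hypothetical corpus of
# court opinions stored as plain-text files. Corpus path and phrase are
# illustrative assumptions, not a reference to any actual dataset.
import re
from pathlib import Path

def keyword_in_context(corpus_dir: str, phrase: str, window: int = 60):
    """Yield (document name, surrounding text) for each occurrence of `phrase`."""
    pattern = re.compile(re.escape(phrase), re.IGNORECASE)
    for doc in Path(corpus_dir).glob("*.txt"):
        text = doc.read_text(encoding="utf-8", errors="ignore")
        for match in pattern.finditer(text):
            start = max(match.start() - window, 0)
            end = match.end() + window
            yield doc.name, text[start:end].replace("\n", " ")

# Example: how have courts used the phrase "arbitrary and capricious"?
for name, snippet in keyword_in_context("opinions/", "arbitrary and capricious"):
    print(f"{name}: ...{snippet}...")
```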
As to the practice, we are seeing federal and state government agencies relying on insights and tools from technological developments to improve public policy. For example, governments are frequently developing and employing algorithms to make various decisions more efficient and less error-prone, in areas as varied as sentencing and the administration of government benefits. Algorithmic decision-making is not without its challenges, to be sure, but the genie having come out of the bottle, we will surely see the continuing use of new tools along these lines to improve policymaking and implementation – in ways not unlike their balanced use in health care. Another example is the use of tech-enabled decision-making – sometimes called, if hyperbolically, “robot judges.” When used to augment human decision-making, more automated modes of adjudication are tremendously valuable in administering justice in a way that is fairer and more efficient for both the government and ordinary citizens. Indeed, it is critical to further develop these tools in areas with huge backlogs and what has been called “high volume/low value adjudication.” This development, what the legal futurist Richard Susskind has labelled “online judging,” is enormously interesting and important.
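As a purely illustrative sketch, and not a description of any actual agency or court system, the following toy rule shows the augmentation idea behind high volume/low value triage: routine, well-documented, low-value matters are resolved automatically, while anything contested or unusual is routed to a human adjudicator. The field names and thresholds are invented for the example.

```python
# A toy triage rule for "high volume/low value" disputes. Fields and
# thresholds are hypothetical and chosen only for illustration.
from dataclasses import dataclass

@dataclass
class Claim:
    amount: float          # amount in dispute
    documented: bool       # supporting documents supplied
    contested: bool        # does the other party dispute the facts?

def triage(claim: Claim) -> str:
    """Resolve routine matters automatically; refer edge cases to a human."""
    if claim.contested or claim.amount > 500:
        return "refer to human adjudicator"
    if claim.documented:
        return "approve automatically"
    return "request documentation"

print(triage(Claim(amount=120.0, documented=True, contested=False)))
# -> approve automatically
```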
What technological innovations do you believe hold the most promise for transforming the delivery of legal services, and what are the potential challenges in adopting these technologies?
If I had to pick a couple, echoing some of my earlier comments, I would first say that the use of technology to help ordinary citizens who cannot afford lawyers seek and obtain justice is the single most important domain, and a promising one. Entrepreneurs in this space have developed myriad tools, including chatbots, legal check-ups, court navigators, document retrieval and guidance mechanisms, and others, that share the common aspiration of reducing the barriers to access to justice, which have historically been a mix of complexity and cost. Frankly, what stands in the way of even greater use of such technologies is not the tech and not even the funding, but regulatory barriers that interfere with the deployment of such technologies on the grounds, which I believe are more often dubious than persuasive, that tech-enabled assistance constitutes the “unauthorized practice of law.” In short, we will need regulatory reform to remove the friction that keeps ordinary citizens from technologies that will help close the serious access to justice gap in the U.S.
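To give a flavor of the simplest end of this tool spectrum, here is a minimal, rule-based “court navigator” sketch in Python. The issue categories, questions, and referrals are invented placeholders, not real legal guidance or any existing product; production tools are far richer and are typically built with legal aid organizations.

```python
# A minimal scripted intake that points a self-represented user toward a
# plausible next step. All categories and referrals are hypothetical.
QUESTIONS = {
    "eviction": "Did you receive a written notice from your landlord? ",
    "debt": "Have you been served with a court summons? ",
}

REFERRALS = {
    ("eviction", "yes"): "See the tenant answer form and your local legal aid office.",
    ("eviction", "no"): "Keep records; a written notice is usually required before filing.",
    ("debt", "yes"): "You generally must respond by the deadline listed on the summons.",
    ("debt", "no"): "You may be at the collection stage; a court response is not yet due.",
}

def navigate() -> str:
    issue = input("Is your issue 'eviction' or 'debt'? ").strip().lower()
    if issue not in QUESTIONS:
        return "Please consult a legal aid organization for other issues."
    answer = input(QUESTIONS[issue]).strip().lower()
    return REFERRALS[(issue, "yes" if answer.startswith("y") else "no")]

if __name__ == "__main__":
    print(navigate())
```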
The other innovation would be generative AI, as I mentioned previously. We are early in the history of this tool, and we can perhaps scarcely imagine the many ways in which ever-more-sophisticated generative AI tools can transform the delivery of legal services. Using a tool such as ChatGPT to help craft compelling legal arguments and to answer questions that require rapid access to information can, when used responsibly, enhance the efficiency of lawyers, reduce barriers to access to justice, and, we hope, improve the quality of legal decisions. After all, let’s remember that the essential meaning of artificial intelligence is the use of tech tools (think especially here of tools that are computational in the broad sense of that term) to do things more quickly and effectively than human decisionmakers, who are ultimately limited in fundamental ways. To the extent that we can employ generative AI to improve human decision-making, the prospects for progress are quite promising.
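As a hedged sketch of what “responsible use” might look like in code, the example below keeps a human in the loop around an LLM drafting call. It assumes the OpenAI Python client as one possible interface; the model name, prompt, and helper function are placeholders, and any real output would require lawyer review, citation checking, and confidentiality safeguards before use.

```python
# LLM-assisted drafting with a human in the loop. The OpenAI client is used
# as an example interface; model name and prompts are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_argument(facts: str, issue: str) -> str:
    """Return a draft outline that a lawyer must verify before any use."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are a drafting assistant. Flag every factual or "
                        "legal assertion that requires verification."},
            {"role": "user",
             "content": f"Facts: {facts}\nIssue: {issue}\n"
                        "Draft a short outline of the strongest arguments."},
        ],
    )
    return response.choices[0].message.content

# The draft is a starting point only; the lawyer remains responsible for
# verifying every authority and factual claim before filing.
```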
The challenges are many. First, we need to understand these technologies well, and the brutal truth is that a fairly small sliver of the population, generally not including lawyers, really understands in any depth and detail how these technologies “work.” There is a black-box quality to many law-relevant technologies, and it will be important for lawyers and others in the law space (including judges!) to get their heads around these technologies, at least at the level necessary to assess their utility, diagnose the challenges, and improve them. Second, there are some familiar ethical issues, which I will discuss more in my answer to the next question. Finally, there can be inequalities in how technology is made available and deployed. Right now, ChatGPT is basically given away for free, but that won’t be true forever. There are already gates being created around certain technologies – maybe call this the “landlording of technology” – that can create a system of haves and have-nots. If we truly want to use technology to democratize legal decision-making and improve access to justice, we will need to figure out how best to deal with economic considerations and various barriers to entry. I honestly don’t have any great, wise answers to these challenges, but I mention them as examples of challenges we will need to earnestly tackle.
What are the key ethical considerations that must be addressed as technology becomes more integrated into legal practice, especially regarding client data privacy and the impartiality of AI in legal decision-making?
I cannot speak with any special knowledge about client data privacy, so I will leave that to the experts; let me focus instead on the impartiality part of your question. First, it is important, as I always say to my students and colleagues, to look at impartiality through a “compared to what” lens. Yes, there are issues of algorithmic bias that stem from various imperfections in how data is scraped and formed into algorithms. But whatever the level and severity of this kind of bias, it must be compared to the biases that humans have, including in legal decision-making. Humans have various prejudices; they can be subject to noxious external influences; they have cognitive limitations; and they have other human frailties (they are busy, they get tired, they get sick, and so on). We know that these issues interfere with “objective” decision-making. This is partly why there is pressure to develop more automated mechanisms of decision-making. While this is no excuse for letting algorithmic bias persist, it is important not to let the best be the enemy of the good. We should work hard to reduce and even eliminate algorithmic bias and other impediments to the ideal of impartiality in legal decision-making. But we should acknowledge that imperfect automation may in some instances be an improvement on deeply flawed legal judgment.
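One way to operationalize the “compared to what” lens is to audit the algorithm and the human decision-makers on the same cases with the same metric. The sketch below uses a fabricated toy dataset and a single crude measure (the gap in approval rates between two groups); real bias audits are far more careful and use many more measures, but the comparative framing is the point.

```python
# A toy "compared to what" audit: measure the approval-rate gap between two
# groups for an algorithm and for human reviewers on the same matters.
# The data below is fabricated purely for illustration.
def approval_rate(decisions, group):
    relevant = [d["approved"] for d in decisions if d["group"] == group]
    return sum(relevant) / len(relevant)

def rate_gap(decisions):
    """Absolute difference in approval rates between groups A and B."""
    return abs(approval_rate(decisions, "A") - approval_rate(decisions, "B"))

algorithm = [
    {"group": "A", "approved": True}, {"group": "A", "approved": True},
    {"group": "B", "approved": True}, {"group": "B", "approved": False},
]
humans = [
    {"group": "A", "approved": True}, {"group": "A", "approved": True},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]

print(f"algorithm gap: {rate_gap(algorithm):.2f}")  # 0.50
print(f"human gap:     {rate_gap(humans):.2f}")     # 1.00
```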
How can legal institutions and professional bodies better support and foster innovation at the intersection of law and technology?
By not imposing regulatory barriers and burdens that make it harder, and sometimes impossible, to use technology to improve justice and to foster better legal decision-making. This is a message for regulatory entities, such as state bars; it is also a message for a national organization such as the American Bar Association, an organization that has done obviously valuable work in improving the welfare of lawyers and in advancing the cause of the rule of law but that, where innovation and technology are concerned, has often lagged behind. The ABA could be part of a collective solution; it should, at the very least, not be an obstacle to progress.
So far as other institutions in the law space are concerned (the AALS, the ALI, and other groups), they can continue to use their bully pulpits to push for innovations that will improve access to justice and enhance legal education, ensuring that the next generation of lawyers and allied legal professionals will be able to do great work in the legal space. As to law reform groups, such as the ALI and the Uniform Law Commission, they can assist us through their measured work in improving the legal regime so as to foster excellent use of technology – “excellence” here being used in the broadest sense of the term.
Lastly, these groups can also assist by proposing and enacting the guardrails that will ameliorate some of the more troubling (real or potential) elements of technology. This, too, is important to ensuring that we use technology in the most responsible ways possible.