
Legal aspects of artificial intelligence

The idea is to play with the notion of seeing legal reasoning as AI information processing, legal theory as AI computer science, and ultimately, law as an AI system.

Mind-machine metaphor is central to AI

Appears in jurisprudence as well

Explicit: Jerome Frank's image of the judicial slot machine (judging is a process wherein cases are fed into a hopper, a crank is turned, and justice is dispensed)

Less obvious: scales of justice (judicial mind is mechanical balance)

Sometimes hidden: Holmes's marketplace-of-ideas metaphor

Chapter 3: Indeterminacy and Open Texture


"Formalism and rule-scepticism are the Scylla and Charybdis of juristic theory: they are great exaggerations, salutary where they correct each other, and the truth lies between them."

- H. L. A. Hart, The Concept of Law (1961)



"Truth and reference are intimately connected with epistemic notions: the open texture of the notion of an object, the open texture of the notion of reference, the open texture of the notion of meaning, and the open texture of reason itself are all interconnected."

- Hilary Putnam, Representation and Reality (1988)


Some issues common to AI and law arise in H. L. A. Hart and Lon Fuller's classic debate over a mythical ordinance that prohibits vehicles in the public park. The problem is what counts as a vehicle for purposes of the ordinance. A car qualifies, except that a police car sent to handle a crime presumably does not. A motorcycle is a vehicle for purposes of this ordinance; a perambulator is not; a bicycle, probably not; a moped, not clear.

Law professor's hypothetical: May the veterans' local arrange to have an army vehicle mounted on a pedestal and placed in the park as a statue?

Same problem can arise in AI.

(Quaint example.)

The automobile is told, "Proceed ahead one mile, then make a left at the first light." The machine dutifully proceeds one mile, then continues to the next intersection, where it encounters four street lights: a traffic signal with three lights, and a police car's flashing light.

(Quaint example because, how the hell would an AI be able to figure out how to drive but not tell when a light is a stoplight?)

Consider two contrasting metaphorical descriptions of communication through language.

In the first description, language communication is seen as a process in which a sender (the speaker or author) transmits speech-objects to a recipient (the reader or listener). Speech-objects are words or concepts that refer to the world and wholly contain the idea to be transmitted. Language is a bidirectional conduit. This metaphor hides aspects of language use: mutual interaction with each other and with the world, the possible involvement of other speakers, and the matrix or network of contexts, from the immediate local setting to the larger sociocultural one.

In the second metaphorical description, language communication is envisioned as a dance. The participants are partners. Their dance is an ongoing, dynamic interplay combining stylized forms with improvisation. The dance requires dancers, but also a dance floor, music, and other dancers: the background or contextual ingredients.

This second metaphor contains possibilities for uncertainty: differences in rhythm, a change of partner, music, room, lights, other dancers, new sections penned.

Uncertainties of significance and context are primarily associated with the speaker and listener and the background in which they converse; the various indeterminacies are primarily associated with the speech-objects themselves.

Differences in expertise give rise to differences in significance... "White's queen captures Black's pawn" may have a different significance to the novice chess player, who sees the move as advantageous, than to the grandmaster, who sees it as a misstep that will ultimately cost White the game.

When the dance of language communication flows smoothly, uncertainty is incorporated seamlessly into its improvised structure. The words "uncertainty" and "indeterminacy" seem somehow inappropriate to the dance metaphor because they connote the possibility and the hope that language may be made certain and determinate.

The conduit metaphor is consistent with formalist approaches in AI and jurisprudence. The metaphor underlies much of this essay's discussion.


Stylized Examples


"The simplification inherent in a formal model is also the source of its power and utility: it will often lead us to insights that would otherwise be obscured. The unexpected consequences of our formulations may reveal surprising truths, or just as often, the inadequacy of the formulations themselves."

- L. Thorne McCarty, "Reflections on TAXMAN: An Experiment in Artificial Intelligence and Legal Reasoning" (1977)


(What follows is a curious view of computer science from a lawyer's perspective.)


"The facts of the system are stored in clusters called frames. A frame describes a real-world object as a list of ordered pairs of properties and values. Each property is called a slot. The frame's structure is analogous to a fill-in-the-blank form or template: each slot is a blank. The frame's class tells the general kind of real-world object the frame is supposed to represent. The system can easily group together different individual frames of the same class so that they may be processed in the same way."


That's some comedy gold right there.
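Comedy gold or not, the quoted frame description maps cleanly onto code. Below is a minimal sketch, assuming plain Python dicts stand in for the system's frame structures; the names (make_frame, CLASS, the slot names) are my own invention, not the article's:

```python
def make_frame(frame_class, **slots):
    """A frame: a class label plus (property, value) pairs.
    Each keyword argument fills one slot of the template."""
    frame = {"CLASS": frame_class}
    frame.update(slots)
    return frame

# Two individual frames of the same class...
honda = make_frame("CAR", wheels=4, motorized=True, location="STREET")
jeep  = make_frame("CAR", wheels=4, motorized=True, location="PARK")

# ...which the system can group together and process uniformly.
cars = [f for f in (honda, jeep) if f["CLASS"] == "CAR"]
print(len(cars), "frames of class CAR")
```

The fill-in-the-blank analogy is the whole trick: a frame with empty slots is the template, and a frame with values filled in is a fact about the world.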


Simulating the Vehicle in the Park

Simple implementation includes frame definitions for CAR, TRUCK, VEHICLE, PARK, TICKET, plus four rules.

Rule 1 specifies what sort of real-world objects are vehicles

Rule 2 specifies that if VEHICLES are found in PARKS an ordinance TICKET is written

Rule 3 specifies that program ends when ticket printed

Rule 4 asks programmer to do stuff

To address the hypothetical of the cars that are allowed in the park, we want to alter the frame definitions in an attempt to represent whatever it is that makes a vehicle a vehicle.
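A toy forward-chaining version of the four rules might look like the sketch below. This is my own illustration, not the article's actual system: frames are dicts, and the "motorized" test standing in for Rule 1 is exactly the kind of brittle definition the hypothetical is designed to break (it would ticket the police car and the statue alike):

```python
def is_vehicle(frame):
    # Rule 1: what sort of real-world objects count as VEHICLEs
    # (here, crudely: anything motorized)
    return frame.get("motorized", False)

def check_park(frames):
    # Rule 2: if a VEHICLE is found in a PARK, write an ordinance TICKET
    tickets = [{"CLASS": "TICKET", "offender": f["CLASS"]}
               for f in frames
               if is_vehicle(f) and f.get("location") == "PARK"]
    # Rule 3: print the tickets and end
    for t in tickets:
        print("TICKET issued to", t["offender"])
    return tickets

world = [
    {"CLASS": "CAR",          "motorized": True,  "location": "PARK"},
    {"CLASS": "PERAMBULATOR", "motorized": False, "location": "PARK"},
]
tickets = check_park(world)  # the CAR is ticketed; the perambulator is not
```

Every refinement to handle a new hypothetical (the police car, the statue) means hand-editing is_vehicle, which is the brittleness the next section describes.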


The Brittleness of Rule and Frame Formalism

"'Socrates is mortal' is hardly more than a counter of logical text-books: S is M will do just as well - or better."

- John Dewey, Experience and Nature (1929)

The previous example raised some issues and showed the problems of the system. The problems are problems of knowledge representation: organizing, storing, retrieving, and manipulating computer information to address the task at hand.

The knowledge that a computer system has is not human knowledge

The vehicle in the park system required a considerable amount of definition to achieve even the simplest results.

An AI system based on a set of real-world legal rules (e.g., IRS tax expert system) requires far more

That such systems can be built suggests that formalism, despite its limitations, is powerful in suitably narrow contexts.



Categorization and Connectionism


"Logic, a component of most legal arguments, fails to provide a natural framework for representing the overall process of legal analysis and argumentation."

- Donald Berman and Carole Hafner, "Obstacles to the Development of Logic-Based Models of Legal Reasoning" (1985)


To the extent that legal reasoning usually yields two plausible answers to a question (the plaintiff's and the defendant's) rather than only one, connectionist and fuzzy systems may provide a closer analogy to legal reasoning than classical systems do.

Connectionist systems' ability to deal with indeterminacy may be seen as an emergent property of their distributed knowledge representations. Emergent properties are those that are exhibited by a system as a whole but that are neither localizable to any part of that system nor obviously similar to any local properties of system components. Douglas Hofstadter describes this phenomenon in his story of Aunt Hillary, the intelligent anthill. Even though each individual ant is dumb and knows only its local environment, the patterns of ants flowing in the hill have meaning.

The meaning is not the computational elements but their connection.

One can also imagine a connectionist system designed to handle the vehicle in the park problem. The system would be trained to distinguish between two classes of photographs of objects in the park. A photo-based system might still fail on Fuller's statue example; a video-based system might not.
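To make the contrast with the rule-and-frame system concrete, here is a minimal connectionist sketch: a single perceptron trained to separate "vehicle" from "non-vehicle" feature vectors instead of photographs. The features, labels, and examples are invented for illustration; the point is that the learned classification lives in the weights rather than in any hand-written definition of "vehicle":

```python
def train(samples, epochs=50, lr=0.1):
    """Perceptron learning: nudge weights toward misclassified examples."""
    w = [0.0] * len(samples[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, label in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = label - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# features: [motorized, has_wheels, heavy]; label 1 = vehicle
samples = [
    ([1, 1, 1], 1),  # car          -> vehicle
    ([1, 1, 0], 1),  # moped        -> vehicle
    ([0, 1, 0], 0),  # perambulator -> not a vehicle
    ([0, 0, 0], 0),  # pedestrian   -> not a vehicle
]
w, b = train(samples)
score = sum(wi * xi for wi, xi in zip(w, [1, 1, 1])) + b
print("car is a vehicle?", score > 0)  # prints: car is a vehicle? True
```

Note that the raw score is graded rather than binary, which is one way such systems accommodate indeterminacy: borderline cases like the moped land near the decision boundary instead of being forced into a crisp category by a frame definition.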



Robots in the Park


"All too often you'll find that the difficult technical aspect of a program results from a failure of the program's task to correspond to any real task."

- Philip Agre, "The Dynamic Structure of Everyday Life" (1988)


An interactionist would... take a different approach.

He would not attempt to solve in the abstract the question of what counts as a "vehicle" for purposes of the ordinance, but instead would set about the practical business of designing a park that does not include unwanted vehicles.