Thursday, April 3, 2008

Computer Ethics Changes Everything

James H. Moor, What is Computer Ethics? Metaphilosophy, October 1985, Vol. 16 No. 4, pp. 266-275

Moor defines computer ethics “as the analysis of the nature and social impact of computer technology and the corresponding formulation and justification of policies for the ethical use of such technology.” He argues that computer ethics should be a special field of study, involving general ethics and science as well as problems specific to the domain of computer technology.

Moor points out that computers are revolutionary in the sense that they have “logical malleability”: the data in a computer can be used to represent any symbol. This allows computers to become part of any activity, and when computers become an essential part of an activity, the ethical issues surrounding the activity itself have to be reexamined – not just the usefulness of computers in that activity. Moor also cites invisible abuse (unauthorized access and malicious software), invisible programming values (assumptions in the software that bias the system’s behavior), and invisible complex calculation (computer activity that cannot be completely observed and understood by humans) to support the need for the special study of computer ethics.

David S. Touretzky, Free Speech Rights for Programmers, Communications of the ACM, August 2001, Vol. 44 No. 8, pp. 23-25

Touretzky argues that software is free speech and should be protected as such. He focuses on the DeCSS case, in which the judge decided that code is speech but that compilable and executable code is more dangerous than protected speech. Touretzky is particularly concerned that the DMCA and this ruling are interfering with the publishing of computer science research.

Touretzky supports his argument with the testimony he gave in the courtroom: there is no clear distinction between discussing an algorithm and distributing software that implements the algorithm. Although Touretzky provided numerous examples blurring the distinction, the judge still ruled on the basis of a distinction between functional software and protected speech.

Response

Moor’s two lines of argument – that any activity changed by computer technology needs to be reexamined, and that invisibility creates new ethical issues – represent two different attitudes toward computer ethics that are now very common.

Invisibility concerns are easier for the general public to understand because they involve clear bad guys: untrustworthy voting machines, biased search engines, viruses, spyware, phishing and so forth. These problems get lots of press, but the existence of botnets suggests the response has been inadequate. (Botnets are remote-controlled networks of hijacked computers owned by unsuspecting victims, often sold or rented for nefarious purposes on the black market. They represent a convergence of invisible complexity and invisible abuse.) Techniques exist to control malicious software and to enforce accountability in computerized activities such as accounting, but they don’t get used.

Touretzky’s paper is an example of the more abstract justification for the study of computer ethics: computers force us to re-examine activities. This is harder to communicate, but it is more likely to result in meaningful action than popular concern about invisible enemies inside the machine. The invisibility problem is a bit like Mary Shelley’s Frankenstein, penned when industrial technology was still a frightening power only beginning to transform human lives. The real computer revolution, however, is more like the later industrial revolution of Charles Dickens, where the ethical issues are not the problems of technology changing human lives but the problems of human lives that have already been changed by technology.
