- Ulf Johansson (Guest editor)
From Ding to Überding: An Introduction to the Machine Learning Issue Theme
- Melissa Terras & Paul Robertson
Using Artificial Intelligence to Read Ancient Roman Texts
- Jouni Smed & Harri Hakonen
Synthetic Players: A Quest for Artificial Intelligence in Computer Games
- Tuve Löfström & Ulf Johansson
Predicting the Benefit of Rule Extraction: A Novel Component in Data Mining
- Olof Sundin
Webbaserad användarundervisning: Ett forum för förhandlingar om bibliotekariers professionella expertis [Web-Based User Education]
- Patrik Svensson
Fanfiktion, datorspel och tillämpad humaniora [Fan Fiction, Computer Games, and Applied Humanities]
This issue of Human
IT presents five articles proper, four of which appear in the
peer-reviewed section, and three of which are devoted to a
particular theme, namely that of artificial intelligence (AI)
and in particular machine learning. In addition, our guest
editor for the theme, Ulf Johansson (Department of Business
and Informatics, University College of Borås), has written a
commendable introductory piece to the issue and to the topics
of AI and machine learning in general. Since he introduces and
comments on the three thematic articles there, I refer the
reader to his text for further presentation. Suffice it to say that
all three provide fascinating journeys into quite diverse
domains of research: computer game studies, papyrology, and
data mining. I think this already suggests just how the
methodology and theory of AI are able to cut across
disciplines. And to an extent, this is of course what Human
IT tries to do on a broader scale: selecting
cross-disciplinary themes, topics and articles.
There are also two
extrathematic articles in this issue:
Olof Sundin continues his discussion of
user education for information seeking from the last issue of
Human IT with a study of how user education is
presented and conducted in 31 web-based guides to information
seeking made available by Nordic university libraries. In the
guides he identifies four different approaches to user
education, approaches that also have consequences for how
such central concepts as information and user are understood.
The guides are viewed as forums in which the
shaping of librarians' professional expertise is negotiated,
and the different approaches also involve different views of
what constitutes this professional expertise.
The second piece is an
article by Patrik Svensson that portrays a well-known MIT
researcher, Henry Jenkins, and his work on fan fiction and
computer games. Earlier this year, Jenkins visited Sweden and
“The Technological Texture” (quite an Ihdea for a title!), a
two-day conference at HumLab, Umeå University. There he not
only gave a funny and well-thought-through talk on media
convergence, but also participated in a debate on game studies
with Espen Aarseth. Both Jenkins’ talk and the debate were
filmed and are accessible online through HumLab’s web site.
This brings me to present a new section in Human IT, “People
and Opinions” (P&O). In previous editorials, we have addressed
the need to open up a new section in the journal for
miscellaneous pieces of text that for various reasons do not
fit in the two journal sections so far launched (the open
section and the refereed section). In P&O, we hope to make
room for material on the outskirts of, or even outside,
traditional scholarly journal genres which we still feel
is of value and interest to the readers of Human IT.
The section will contain, for example, opinion pieces and
argumentative essays, interviews and portraits, popular
science articles, minor classics and classical minors,
reports from conferences and other events, and other texts
of significant intellectual value that have found no
permanent place in print or on the web.
We aim to publish at
least two more issues of Human IT this year, of which
one has a theme, “Dynamic Maps”, while the other does not.
Looking further ahead, we’re currently planning for upcoming
issues and novel themes and warmly welcome article
contributions as well as reader suggestions for future themes
and journal improvements. As for the latter, finally, work has
been going on during the winter to provide rich Dublin Core
metadata to all material published in Human IT, and to
create new navigational, metadata-based aids for the readers,
including keyword and author indices as well as a list of
abstracts for all volumes and issues of the journal.
Clicking an item in an index will direct the reader to the
full-text article referenced. These aids will be made
available on the journal web site later this spring.
Borås in March 2005
Mats Dahlström, Editor
From Ding to Überding
An Introduction to the Machine Learning Issue Theme
I went to a blisteringly cold Stockholm to finally see my
long-time heroes Kraftwerk perform live. The concert
was a massive experience, full of contrasts. For most of the
time, the four German gentlemen (all well over 50 years old)
stood completely motionless, while suggestive,
state-of-the-art graphics were projected on large screens
behind the stage. Already in the first song, the legendary
“Man Machine”, the overall theme of technology and mankind was
well established. It suddenly became very obvious to me, as
an AI researcher, that these guys had managed, as early as
1978 when “Man Machine” was written, to (maybe accidentally)
pinpoint the goal of AI.
The lyrics of
“Man Machine”, as in most Kraftwerk songs, are sparse and
repetitive. More specifically, the words ‘man machine’ are
repeated almost endlessly, interrupted now and then by the lines:
halb Wesen – halb Ding
halb Wesen – halb Überding
[half being – half thing; half being – half super-thing]
And there it is: an artifact partly human,
with the other part being better than a machine, a
supermachine. To me this says in four words what AI
researchers have tried to achieve since the field's beginnings. This is
what Good (1965) referred to as “the ultraintelligent
computer”. Obtaining mechanical intelligence on a human level
is not the ultimate goal. Unfortunately the goals have nearly
always been either to mechanically mimic human
intelligence or to create a machine capable of
performing tasks that would require intelligence, if performed
by a human.
Two months later I attend an AI conference
in the always hot and sunny city of Miami, Florida. The
keynote speaker, professor Edward Feigenbaum from Stanford, is
one of the most famous and respected pioneers in the entire
field. The topic of his talk is, loosely put, “Some Grand
Challenges for Computational Intelligence”, and is based on a
highly interesting paper (Feigenbaum 2003). To set the stage,
professor Feigenbaum starts by reasoning about the Turing Test
(TT) (Turing 1950).
In a nutshell the TT is a test determining
if an artifact is capable of showing a behavior so similar to
human behavior that the term intelligence would be
appropriate. More specifically, the test is an imitation game
where an interrogator is allowed to ask any question both to
the artifact being tested and to a human. The restrictions of
the game are normally that the interrogator, the artifact and
the human are placed in separate rooms, and that the
communication is in written text, transformed electronically.
If, after the questioning, the interrogator is unable to
distinguish the artifact from the human, the artifact is
deemed intelligent. The TT thus clearly focuses on behavior only,
and disregards the question of whether or not the interior processes of
the artifact resemble human processes.
Although Turing himself thought that the
imitation game would be beaten before the year 2000, we are
still nowhere near an artifact being able to pass even a few
minutes of a TT. As a matter of fact, most AI researchers
would reluctantly regard the TT as a long-term vision
rather than as an operative goal worth striving for. The
simple truth is that the TT is so overwhelming that only
rarely does one even discuss what kind of skills it would
require. However, Feigenbaum does list several high-level
skills, taken from a paper by Gentner (2003), as definite
parts of human intelligence, such as:
the ability to draw abstractions from particulars
the ability to reason outside the current context
the ability to reason analogically
the ability to learn and use external symbols to represent numerical, spatial or conceptual information
the ability to invent and learn terms for abstractions as well as concrete entities.
According to Feigenbaum, substantial
progress has been made in most of the skills listed, although
the level necessary to pass a TT is still way out of reach.
For example, no artifact is able to “read and understand as
well as a human”. More specifically, it is the understanding
part that fails, while computational linguists have come up with
very clever methods for the basic “reading”.
Feigenbaum goes on to discuss one of his
favorite subjects, Expert Systems (ES), i.e. reasoning systems
tailor-made for specific difficult tasks limited to very
narrow domains. In a way, ES are at the opposite end of AI
compared to efforts ultimately trying to pass the TT. ES are
focused on specific and well-specified tasks, something
tremendously different from the extremely wide and
unpredictable TT. In addition, the performance of ES is totally
dependent on the (human) knowledge encoded in the system.
Nevertheless, Feigenbaum calls ES partially intelligent
artifacts, and claims that the major lesson learned from
the golden age of ES is that artifacts must have extensive
knowledge of the domain to be able to compete with humans
on complex tasks. According to Feigenbaum, the power of an
intelligent artifact must lie in the knowledge base and not in
the reasoning methods. This is a strong claim, and it strikes
directly at one of the major disputes within the AI community.
During Feigenbaum’s talk in Miami this is further stressed
when he debates the subject with professor Tom Mitchell from
Carnegie Mellon. Professor Mitchell, the author of the
standard textbook Machine Learning (1997), is a
well-known representative of the field of machine learning.
Although the tone of the argument is polite and humorous, it
is obvious that this really is a heated topic. In his paper, Feigenbaum calmly states that “we now have an overabundance of
logically powerful and elegant methods”. Returning to the
TT, Feigenbaum concludes that the major obstacle to an
artifact being able to pass the test is the sheer size of the
knowledge base needed. “Acquiring such a large
computer-useable knowledge base is a Very Grand Challenge”.
Accordingly, professor Feigenbaum
introduces another test, “a more manageable task”, which he
calls challenge #1: the Feigenbaum test (FT). The FT
is similar to the TT but now the human player is an elite
scientist (member of the National Academy) in a preselected
field, while the interrogator is another member of the
National Academy in the chosen domain. Obviously the
questioning is now limited to the specific domain, but may
include problem solving, explanations and theories.
It is hard to say whether the FT is really
that much easier than the original TT. It is nevertheless an
interesting idea, and professor Feigenbaum does indeed think
that an artifact will pass the FT within 25 years. Arguably,
even more interesting are the two “extra” grand challenges he
proposes. Since the key to any intelligent system according to
Feigenbaum is the knowledge base, efficient knowledge
acquisition becomes crucial. Therefore, challenge #2 is to
build a large knowledge base by reading text (reducing the
knowledge engineering effort by one order of magnitude) and
challenge #3 is to distill a large knowledge base from the
WWW, reducing the effort by many orders of magnitude.
Is the AI field up to these challenges? I
do not know. Twenty-five years seems to be a very short time, but then
again, AI has constantly produced amazing results in very
diverse domains. A computer vision steering system can be
trained to steer a car just by “observing” a human driver.
Medical ES are able to perform at the level of expert
physicians in several areas of medicine. There are programs
capable of language understanding and problem solving, making
them able to outperform most humans on crossword puzzles.
NETtalk (Sejnowski & Rosenberg 1987) used neural network
technology and only 1024 more or less randomly selected words
to train on when learning to pronounce written English text.
NETtalk performed on a level comparable to specialized
programs that had required several man-years to build.
Many of my favorite results come from the
field of game playing. The best chess players in the world are
now regularly beaten by software opponents. Perhaps the most
fantastic result is still when Gerald Tesauro (1995) trained
an agent (TD-Gammon) to play backgammon, starting with the
agent knowing only the rules of the game and the concepts of
winning and losing. Using a variant of reinforcement
learning, TD-Gammon learned to play at world champion
level using only self-play (i.e. playing a copy of itself). In
fact, human players of all ranks now learn from how the agent
plays. Isn't that another twist to machine learning?
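To make the self-play idea concrete, here is a minimal sketch of the recipe behind TD-Gammon: a value function that is improved by temporal-difference updates while the agent plays against a copy of itself. The toy below substitutes a lookup table and Tic-Tac-Toe for Tesauro's neural network and backgammon, so it only illustrates the principle; the names and parameters (the learning rate ALPHA, the exploration rate EPSILON, the number of training games) are assumptions made for the sake of the example, not anything taken from the original system.

import random
from collections import defaultdict

# Tabular TD(0) learning through self-play on Tic-Tac-Toe: a toy stand-in
# for TD-Gammon's neural-network value function over backgammon positions.
# The agent starts out knowing only the rules and what winning/losing means.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

V = defaultdict(lambda: 0.5)   # estimated value of a position, from X's point of view
ALPHA, EPSILON = 0.1, 0.1      # learning rate and exploration rate (assumed values)

def value(board):
    w = winner(board)
    if w == "X":
        return 1.0
    if w == "O":
        return 0.0
    if "." not in board:
        return 0.5             # draw
    return V[board]

def choose(board, player):
    moves = [i for i, c in enumerate(board) if c == "."]
    if random.random() < EPSILON:
        return random.choice(moves)        # explore
    best = max if player == "X" else min   # X maximises the estimate, O minimises it
    return best(moves, key=lambda m: value(board[:m] + player + board[m + 1:]))

def self_play_episode():
    board, player = "." * 9, "X"
    while winner(board) is None and "." in board:
        move = choose(board, player)
        nxt = board[:move] + player + board[move + 1:]
        # TD(0) update: nudge the current position's value towards its successor's.
        V[board] += ALPHA * (value(nxt) - V[board])
        board, player = nxt, ("O" if player == "X" else "X")

for _ in range(20000):
    self_play_episode()

After some tens of thousands of such games the table plays a respectable game of Tic-Tac-Toe, even though no strategic rule was ever programmed into it; replacing the table with a function approximator and the game with backgammon is, in rough outline, the step TD-Gammon took.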
In the introductory AI course at the
University College of Borås we normally assign the students to select
a game (such as Othello, Four-in-a-row or Dots-and-Boxes) and
train an agent using techniques similar to those used by
TD-Gammon. Every year the students are equally surprised
when the agent, after sufficient training, becomes more or less
unbeatable. Sometimes we discuss this result in connection with
“Lady Lovelace’s objection” to the TT, i.e. that machines are
only able to do what they are programmed to do. If that is the
case, how can the students produce an agent that obviously plays
much better than its creator? To me, learning is probably the
most important key to successful intelligent agents in the
future. Only if an artifact is able to actually learn (from
experience or other sources) and generalize from the knowledge
learned can we talk about machine intelligence.
So when Kraftwerk, in their perhaps most popular song,
say “We are programmed just to do anything you want us to”, they
mean just that. Robots, machines, agents, artifacts or
whatever we choose to name them will eventually be able to do
things unimaginable today. From that perspective both the TT
and the FT are just milestones on the way to the Überding.
So where is AI today and in what direction
should we move? Personally I think that we must balance the
immediate need for many intelligent applications against more
distant goals. Today, AI techniques are tools in many
practitioners’ toolboxes. This is an important factor for AI
project funding, but also for generating respect for the field
in general. Data mining, strong computer opponents in games,
more or less intelligent robots and ES in all kinds of domains
are only four examples where AI techniques are already
integral parts of systems in use. At the same time I think
that AI will continue to need the “hype” and coolness it
has enjoyed over the years. If nothing else, some of the
sharpest brains have been attracted to AI just because of the
overall ambition to create intelligence. From this
perspective, it is important to maintain some of the magic and
science fiction around AI. It isn’t simply a mix of
mathematics, engineering and logic. It is the ultimate goal
for computer science. Professor Feigenbaum calls CI
(computational intelligence) the destiny for computer science.
He even compares it to the “manifest destiny”, i.e. the
vision of a United States spanning from the Atlantic all the
way to the Pacific that inspired generations of settlers to embark
on adventurous journeys to move the frontier forward, closer
to the ultimate goal. Professor Feigenbaum concludes his paper
with the following paragraph:
Computational Intelligence is the manifest destiny of computer science, the goal, the
destination, the final frontier. More than any other field of
science, our computer science concepts and methods are central
to the quest to unravel and understand one of the grandest
mysteries of our existence, the nature of intelligence.
Generations of computer scientists to come must be inspired by
the challenges and grand challenges of this great quest.
This special issue of Human IT focuses on AI and
machine learning. The call for papers was intentionally very
broad to encourage a mixture of submissions, showing the
fascinating diversity of AI. Judging by the three papers
accepted for publication, this goal has indeed been met. The
first paper, by Terras and Robertson, describes an impressive
application where sophisticated AI techniques are brought in to
assist in reading ancient Roman tablets, of which only small
fragments of text are extant and which are therefore notoriously
hard to decipher. Can machine intelligence help us make these so far
dumb artifacts speak? The work described by Terras and
Robertson blends cutting-edge research from both the
humanities and the technology sciences, making it a fine
example of the kind of cross-disciplinary work Human IT
strives to present. The second paper, by Smed and Hakonen,
thoroughly describes (computer) games focusing on synthetic
players, i.e. all computer-controlled actors in the game. Especially interesting
is the concluding prediction that the behavioral aspects of the
synthetic player will become even more important in the future. Isn't
that an interesting task for AI researchers? The third
paper, written by Löfström and myself, is a meta-learning
study, focusing on whether some interesting properties of a
data mining problem can be predicted from characteristics of
the data set. The motivation for this study is that data
miners often have to take vital decisions early in the process
and that some of these design decision are better left to the
When I ride the subway
after the Kraftwerk concert, I read the Metro
newspaper. In an article I find that a major Swedish retailer
is accused of gathering excessive data from customers who use
the retailer's credit card to pay. Again it strikes me that
Kraftwerk wrote songs about this more than 20 years ago.
In “Computer World” they recognize that data can and will be
transformed into knowledge, and express the accompanying anxiety
with the lines: “Interpol and Deutsche Bank, FBI and Scotland
Yard, CIA and KGB control the data memory”. Even more
fascinating is their prediction in the same song that the
“computer world” will really consist of “Business, numbers,
money, people, communication, time, medicine, entertainment”.
From this it must be very obvious that AI will only increase
in significance in the years to come.
After the talk by
professor Feigenbaum I feel extremely excited and proud to
work in the field of AI. Maybe AI really is the holy grail of
computer science. President Kennedy said, when establishing
another manifest destiny: “We choose to go to the moon in this
decade and do the other things, not because they are easy, but
because they are hard”. Isn’t that the perfect motivation?
Let's strive for the Überding, just because it is so hard.
Feigenbaum ended his speech by saying something like “AI is a
wonderful thing because it can do wonderful things”. I
would like to add to that and conclude this introduction by
saying that AI is indeed a wonderful thing, not only
because it can do wonderful things but also because it is done
in wonderful ways!
Feigenbaum, Edward A. (2003). “Some Challenges and Grand Challenges for Computational Intelligence.” Journal of the ACM 50.1.
Gentner, Dedre (2003). “Why We’re So Smart.” Language in Mind: Advances in the Study of Language and Thought. Eds. Dedre Gentner & Susan Goldin-Meadow. Cambridge, MA: MIT Press. 195-235.
Good, Irving J. (1965). “Speculations Concerning the First Ultraintelligent Machine.” Advances in Computers 6: 31-88.
Mitchell, Tom M. (1997). Machine Learning. New York: McGraw-Hill.
Sejnowski, Terence J. & Charles R. Rosenberg (1987). “Parallel Networks that Learn to Pronounce English Text.” Complex Systems 1: 145-168.
Tesauro, Gerald (1995). “Temporal Difference Learning and TD-Gammon.” Communications of the ACM 38.3: 58-68.
Turing, Alan M. (1950). “Computing Machinery and Intelligence.” Mind 59: 433-460.
The group is actually replaced on stage (by robots, of course) for this specific song.