Computer Science R&D in Japan
There is rising international interest in Japanese scientific and technological
R&D, and Japanese researchers are working to make their
work available to the public, notably via the Internet. This month, Computing
Japan takes an introductory look at what is going on inside the computer
science research labs at Japan's universities, corporations, and government-sponsored
institutions. And starting with the August issue, we will introduce a monthly
R&D page that focuses on some of the many astounding, and often esoteric,
projects being explored by Japan's researchers.
by Steven Myers
In the popular media, Japan has long had the image of a country skillful
at adopting and improving upon foreign technology, but contributing relatively
little original research work of its own. The past few years, however, have
seen a sharp increase in the attention given to Japanese research and development
(R&D) by foreign scientists and media pundits, and the global community
is starting to realize that significant R&D efforts are underway in
Japan. Several organizations and project groups have appeared recently,
aiming to expand the collection and dissemination of information pertaining
to Japanese technological research. Not surprisingly, a large percentage
of Japan's research is related to computer science and the computer industry
- especially issues pertaining to networking, distributed operating systems,
and human-computer interaction - and involves Japanese universities as well
as corporate giants such as NTT, Sony, Fujitsu, and Hitachi.
Rising international interest in Japanese R&D
In November 1993, H. U. Hoppe, a German computer scientist working for the
German National Research Center for Computer Science (GMD), reported on
his three-month trip to various research centers in Japan. Dr. Hoppe said
that his visit would serve as the pilot version for future GMD-sponsored
scientific visits to Japan, and in his report he stressed the need for foreign
computer scientists to gain "deeper insight into specific research
in Japan, in a way that goes beyond what is possible through short round-trips
or conference visits."
This view seems to be representative of the current general mood regarding
Japanese R&D. According to Dr. Hiroaki Kitano, a researcher at Sony
Computer Science Lab (CSL) and one of Japan's leading authorities on artificial
intelligence (AI), Sony's lab has seen a remarkable increase in the number
of foreign visitors in the past two years. The lab currently hosts at least
one group of visiting scientists each week from leading universities and
research centers around the world. Several computer research labs throughout
Japan contacted by Computing Japan, including Sony CSL, say that they are
inundated with requests from foreign researchers for "visiting scientist"
positions.
Why this sudden surge in foreign interest? It may be that Japanese R&D
efforts are at last becoming more visible. In a speech delivered last year
at a Japan Information Center of Science and Technology (JICST) conference,
Dr. Mary Good, an undersecretary of the US Department of Commerce (DOC),
noted that the DOC has long been committed to turning the attention of US
industry toward Japanese science and technology information. These efforts
are finally paying off in the form of an increase in foreign awareness of
such information. Dr. Good says she is greatly encouraged by the increasing
number of scientists studying Japanese, and excited about the establishment
of a Machine Translation Center in Washington, DC - a joint project of DOC
and JICST.
The impact of the Internet on technology management practices in Japanese
government and industry has been cited as another factor in the renewed
interest shown toward Japanese R&D. The wide-open nature of the Internet
is contributing slowly but steadily to a breakdown of bureaucratic barriers
that have hampered the spread of Japanese scientific information in the
past. Many Japanese are drawn to the Internet's capacity for unbiased
dissemination of information, in contrast to the tight control that
government ministries have traditionally tried to exert. Today, with
the proliferation of Japanese R&D labs offering technical reports on
their work through their own World Wide Web (WWW) pages, it is easier than
ever before for a foreign computer scientist to obtain up-to-date information
on Japanese research developments.
Japanese attitudes toward R&D
Foreign scientists investigating Japan as a source of technological information
are naturally curious about the apparent differences in attitude regarding
the role of R&D that exist between the "powers that be" in
Japan and those in their home countries. Several researchers have commented
on the pronounced differences in "style" and orientation of the
computer-related research being conducted in Japan. Dr. Hoppe, for example,
notes that the necessity for basic research of a general nature (as opposed
to applied research related to specific products) is much more accepted
in Japan than it is in Europe. "It is not questioned that results of
this research are values per se, not just the ensuing industry products,"
he notes.
Researchers in Europe have often been blamed for the failure of the European
IT (information technology) industry, Hoppe says, but such arguments are
seldom heard in Japan. Computer science researchers in Japan are not only
allowed, but encouraged, to take on complex problems in fundamental areas
of computing through long-term projects whose outcome is far from certain.
Dr. Kitano supports this view, acknowledging that many of the projects currently
underway at Sony CSL might not bear fruit for another 10 to 15 years. These
projects are nonetheless actively supported by Sony Corporation, he notes.
This appears to be indicative of a recent shift of emphasis in Japan, away
from product-driven R&D and toward research of a more basic nature.
Because of Japan's export-oriented economy and sector-specific interaction
between government and industry, the nation has heretofore focused on rapid
product development and technology acquisition/transfer rather than on basic
technology development. In last December's White Paper on Science and Technology,
however, the Japanese Science and Technology Agency (the highest science
and technology policy-making body in the government) called on government
agencies and private firms to make efforts in such areas as strengthening
basic research, developing originality and creativity in their staffs, and
placing importance on international aspects.
Government promotion of R&D
The White Paper states that Japan must "take more initiative in contributing
to the knowledge base of mankind and in resolving issues of global concern."
The Science and Technology Agency identifies more than 180 "fundamental
technologies" that should be developed over the next decade in order
to advance the frontier of technological R&D in Japan. Among computer-related
technologies, the areas emphasized include data standardization and database
development, integrated simulation and virtual reality technologies, human-computer
interfaces, and autonomous distributed systems.
One proposal put forth by the Science and Technology Agency to promote Japanese
R&D involves the development of advanced "fact databases."
This would require close cooperation between specialized information services,
research institutes, and academic societies. The agency also recommended
the implementation of a new policy for performance evaluation of individual
researchers, one that will give proper credit to those researchers who make
valuable contributions to databases. The agency has called on corporate
research labs to participate fully in efforts to develop these databases.
The corporate reality
In spite of such visionary recommendations, the Japanese government
has taken considerable criticism recently over the perceived lack of progress
by Japan in computer technology development. As evinced by the recent media
frenzy over multimedia and the Internet, as well as by recent books with
such (translated) titles as The US-Japan Multimedia War, there is an attitude
of fear among many Japanese that the nation is falling behind in a technology
race with the United States.
A recent Time magazine article advanced the opinion that virtually all of
the major elements in Japan's multimedia market are underdeveloped, due
to "over-regulation, high prices, and other innovation-stunting problems."
The Time article cites several revealing statistics: 96% of American
homes have access to cable television (a major part of the information-highway
infrastructure), versus 19% in Japan; and 52% of personal computers in the
US are connected to a network, compared with only 9% in Japan.
It seems evident that a large number of Japanese corporations feel the need
to develop technologies and infrastructure that will help to quickly secure
the domestic market in the areas of multimedia and virtual reality, rather
than focusing on goals that are less concrete. Indeed, according to a recent
Science and Technology Agency survey, private firms are increasing their
expectations for the results of R&D. Some 58% of the respondents demand
that their researchers develop products leading to actual production, 40% support
the elimination of research that does not yield tangible results, and 39%
favor the shortening of their firm's time limit for R&D to produce results.
R&D spending down
Japanese corporate spending on R&D has been declining since fiscal year
1992, when R&D investment dropped for the first time ever. In spite
of this continued decline, however, the government is not providing direct
assistance to corporations for R&D, nor is such assistance expected.
Perhaps surprisingly, the corporate view is that government support
for R&D should go toward promoting basic research at universities and national
research centers.
Almost 70% of the corporations surveyed stated that, even in the economic
recession, the relative importance of R&D in their overall business
strategy has increased. This response, together with statistics showing
that the number of corporate researchers has increased, leads the government
to believe that the current decline in corporate R&D expenditure is
a temporary phenomenon and will soon be reversed.
Sharing the knowledge
A large number of government organizations, academic groups, and other programs
have appeared, both in Japan and abroad, to promote the international sharing
of Japanese technological research results. Japan Window (a Web site provided
by NTT and Stanford University), the University of Arizona's JapanCS project,
and the University of New Mexico's US-Japan Center are among the organizations
providing such information. (See the sidebar for http addresses.)
Online information sources are valuable tools for foreign scientists, and
the increase in the number of researchers actually making visits to labs
in Japanese universities and corporations is seen as a positive trend (one
that is being actively promoted by several organizations within Japanese
and foreign governments). As Dr. Hoppe comments, however, there is also
a "strong and dense network of personal links between Japanese researchers
across different fields of computer science and information technology that
can help prepare visits, make new contacts, and make existing contacts more
valuable through personal relations."
In the following pages, we introduce just a few of the well-known computer
R&D labs in Japan, and describe some of the projects underway there.
And starting next month, Computing Japan will feature a monthly report on
other labs and projects.
Sources of Japanese computer R&D information on the World Wide Web
Agency of Industrial Science and Technology, Ministry of International
Trade and Industry
http://aist.go.jp
JapanCS Project, University of Arizona
http://cs.arizona.edu
Japan Information Center of Science and Technology
http://jicst.go.jp
Japan Window
http://jw.stanford.edu
National Center for Science Information Systems
http://ncsis.ac.jp
NTT
http://ntt.jp
Real World Computing Partnership
http://rwcp.or.jp
Sony Computer Science Laboratory
http://csl.sony.co.jp
Tokyo Institute of Technology
http://soc.titech.ac.jp
Navicam and Social Agent Projects (Sony Computer Science Laboratory)
Sony Computer Science Laboratory (CSL) was founded in February 1988 to
conduct research in computer science. Well known in Japan for having designed
and implemented Apertos (a distributed object-oriented operating system
in use at many Japanese universities), Sony CSL is currently home to 15
of Japan's top computer scientists, all holding doctorates from prestigious
universities. Sony CSL's research covers a broad range
of topics, including networks, programming languages, human-computer interaction,
artificial intelligence, and complex systems. At present, the lab is also
hosting two foreign researchers, and is actively involved in joint research
projects with well-known computer R&D labs around the world.
When visiting Sony CSL, I was immediately impressed by its open and relaxed
atmosphere. None of the research staff have ranks, titles, or other
markers of seniority. Dr. Hiroaki Kitano, a well-known researcher
in the field of AI (artificial intelligence) and recent recipient of the
prestigious Computers and Thought Award, explains that - unlike the vast
majority of R&D labs in Japan - Sony CSL's compensation system is completely
unrelated to seniority; researchers are financially compensated in accordance
with their individual achievements. He commented that almost all of the
projects underway at the lab are conducted by, at most, two members. Graduate
students from schools such as Keio and Tokyo University provide assistance
when needed.
Most of the lab's many projects are long-term, and not related to the development
of specific Sony products. Two projects that particularly stand out are
those of the prototypes for NaviCam and Social Agent. Both are examples
of research into the field of complex systems, involving the development
of autonomous, intelligent agents.
NaviCam
NaviCam, being developed by Jun Rekimoto, is a tiny, highly portable computer
system that displays a high degree of position and situation awareness,
as well as impressive speech recognition capabilities. The goal is to develop
a system so small and unobtrusive as to be virtually unnoticeable to the
user - one that can be used to supply context-sensitive information for
a variety of situations. Rekimoto describes this type of human-computer
interaction (in which the user interacts with the real world through a transparent
computer) as "HyperReality." This contrasts, he says, with the
human-computer interaction styles such as virtual reality (in which the
computer replaces the real world) and the "ubiquitous computers"
approach (in which objects in real life become computers) being explored
through Tokyo University's TRON project.
To keep the devices light and small, the system uses wireless communication
to connect to a back-end computer, which acts as a server. The server contains
the database that stores activity information about the user and real-world
information about the current environment, and it can also act as a gateway
to other networks.
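The division of labor described here suggests a simple request/response
exchange between the wearable client and the back-end server. The sketch
below is purely illustrative: the message layout and the answer_query()
routine are invented, since the actual NaviCam protocol is not published.

/* Hypothetical sketch of a NaviCam-style context query. The client
 * sends its position plus a recognized spoken query; the server
 * consults its environment database and returns text for the client
 * to speak. All names here are invented for illustration. */
#include <stdio.h>
#include <string.h>

struct context_query {
    double x, y;            /* user's position in the building   */
    char   utterance[128];  /* recognized speech                 */
};

struct context_reply {
    char speech[256];       /* text for the client to synthesize */
};

/* Server side: a real server would also consult per-user activity
 * history and could forward unknown queries to other networks. */
static void answer_query(const struct context_query *q,
                         struct context_reply *r)
{
    if (strstr(q->utterance, "Where are we"))
        snprintf(r->speech, sizeof r->speech,
                 "You are near (%.1f, %.1f) on this floor.", q->x, q->y);
    else
        snprintf(r->speech, sizeof r->speech, "I do not know yet.");
}

int main(void)
{
    struct context_query q = { 12.5, 40.0, "Where are we now?" };
    struct context_reply r;
    answer_query(&q, &r);   /* in reality, sent over the wireless link */
    printf("NaviCam says: %s\n", r.speech);
    return 0;
}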
In a prototype demonstration, NaviCam was used to provide spoken information
about the user's surroundings within different parts of the building, based
on spoken queries (for example, "Where are we now?" or "Who
works in this room?"). Rekimoto predicts that within five years, such
computer systems will be as commonplace as the Walkman and other portable
audio devices.
Social Agent
The aim of the Social Agent project, being developed by Katashi Nagao and
Akikazu Takeuchi, is to create an intelligent computer interface in the
form of an autonomous agent that appears on the screen as a human face.
This agent is able to follow and contribute to conversation with human speakers
(or even other agents), using facial expressions as well as spoken language.
The agent will be able to shift its gaze from speaker to speaker and detect
communication mismatches.
The current implementation comprises two subsystems: a facial animation
subsystem that generates a three-dimensional face capable of displaying
facial expressions, and a spoken-language subsystem that recognizes and
interprets natural speech (and provides spoken output). The model of the
face is composed of some 500 polygons, and muscle movements are simulated
numerically. In the prototype implementation, the animation subsystem runs
on an SGI 320VGX and communicates over an Ethernet network with the
spoken-language subsystem, which runs on a Sony NEWS workstation.
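The message format used between the two machines is not described, but the
split suggests compact commands flowing from the spoken-language subsystem
to the animation subsystem. The fragment below imagines such a command;
every identifier in it is hypothetical.

/* Hypothetical command from the spoken-language subsystem (Sony NEWS)
 * to the facial animation subsystem (SGI). The prototype's actual
 * wire format is not published; this only illustrates the idea. */
#include <stdio.h>

enum expression { NEUTRAL, SMILE, SURPRISE, PUZZLED };

struct face_command {
    enum expression expr;  /* expression to blend in        */
    int   gaze_target;     /* id of the speaker to look at  */
    float intensity;       /* 0.0 (none) through 1.0 (full) */
    char  phoneme[8];      /* current phoneme, for lip sync */
};

int main(void)
{
    /* "Look at speaker 2, smile slightly, mouth the phoneme 'oh'." */
    struct face_command cmd = { SMILE, 2, 0.4f, "oh" };
    /* In the prototype this would be written to a socket bound to
     * the animation subsystem; here we just display the payload. */
    printf("expr=%d gaze=%d intensity=%.1f phoneme=%s\n",
           cmd.expr, cmd.gaze_target, cmd.intensity, cmd.phoneme);
    return 0;
}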
I observed a demonstration in which two humans were discussing their plans
for the evening. The "face" of the computer agent followed each
speaker, offering occasional suggestions and replying to questions from
the speakers about what TV programs were scheduled for the evening. The
agent also handled spoken requests for such tasks as making dinner reservations
and setting a VCR.
Nagao and Takeuchi say that future plans for the system include the simulation
of human-to-human communication in more complex social environments, taking
into account such factors as the social standing, reputation, personality,
and values of each participant. Based on these studies, the pair hope to
be able to propose a set of design principles for a society of computer
agents. - S. Myers
Multilingual I/O and Text Manipulation System Project (Waseda University)
At Waseda University's School of Science and Engineering, a team of researchers
has been working since 1992 to implement a fully "internationalized"
computer system - one that can handle all of the world's writing scripts
and code sets dynamically, with a minimum of overhead. The project was conceived
in 1988 when Yutaka Kataoka, now head researcher for the project, was asked
by MIT's Robert Scheifler to investigate methods for incorporating multilingual
support into the X11R5 release of the X Window System. While the multilingual
capabilities introduced in that release were a major improvement over the
X11R4 version, Kataoka says that a true multilingual solution has yet to
be realized, although the need for such a system is growing quickly.
The Multilingual I/O and Text Manipulation System Project began in April
1992 and was renewed by the University in March 1995. (R&D projects at
Waseda are generally granted for three-year terms; projects deemed promising
enough can be renewed). Kataoka says the project has received little corporate
funding (the only initial supporter was Omron Corporation); in fact, it
has been strongly discouraged by several big-name computer vendors who would
much rather see their own proprietary "localized" systems become
the standard. Government support has also been scarce, due in part to Waseda's
"renegade" image (which stems from the University's repeated refusals
to comply with arbitrary Ministry of Education guidelines for curriculum
and degree requirements). Nevertheless, research has progressed, and as
papers on the project have been published, more and more organizations,
including NTT and JCC (Japan Computer Corporation), are backing the project
and offering their support.
Multilingual text-processing issues
Initial research on the project involved the analysis and categorization
of global written languages and orthographies. Kataoka confesses that the
extreme variation of natural languages posed several challenges to the researchers.
For example, many languages do not use blank spaces to delimit words, and
punctuation symbols vary greatly among languages. In addition to classifying
languages as phonogrammic (e.g., English) or ideographic (e.g., Chinese),
developers must also consider that in some writing systems (e.g., Arabic
or Devanagari), the symbolic forms of written characters vary with their
position within a character string. Most current implementations, says the
research team, are extremely limited because of insufficient knowledge about
writing conventions and the restrictions of their I/O-system development
environments.
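To make the positional-form problem concrete: in a joining script such as
Arabic, a letter takes a different glyph depending on whether its neighbors
connect to it. The logic for choosing among the four standard forms can be
sketched as follows (a generic illustration, not code from the Waseda
project):

/* Select a character's positional form from whether its neighbors
 * join to it. In a real shaping engine, the joining properties come
 * from per-character tables; here they are simple flags. */
#include <stdio.h>

enum form { ISOLATED, INITIAL, MEDIAL, FINAL };

static enum form select_form(int joins_prev, int joins_next)
{
    if (joins_prev && joins_next) return MEDIAL;
    if (joins_prev)               return FINAL;
    if (joins_next)               return INITIAL;
    return ISOLATED;
}

int main(void)
{
    static const char *name[] = { "isolated", "initial", "medial", "final" };
    /* A three-letter word in which every letter joins both ways: */
    printf("first letter:  %s\n", name[select_form(0, 1)]);  /* initial */
    printf("middle letter: %s\n", name[select_form(1, 1)]);  /* medial  */
    printf("last letter:   %s\n", name[select_form(1, 0)]);  /* final   */
    return 0;
}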
Implementation
The input method of the project's system is implemented in the following
manner: When keystrokes are input to an application, they are intercepted
by an input multiplexer module, which transfers the keystrokes to the input
manager via a communication library. The automaton interpreter within the
input manager converts the keystroke sequence to ideograms, and transmits
them to the editor module along with information on whether further conversion
is needed. The character string is then returned to the input multiplexer.
The code set converter provides the input multiplexer with the code set
specified by the application, and the input multiplexer then loads the character
string values from this code set into the buffer for a function called
FGetString(), which returns the value to the application, acting as the
interface between the application and the IM library.
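In rough outline, the input path might be exercised as in the toy program
below. The FGetString() name comes from the project, but its signature and
the automaton's behavior shown here are guesses made for illustration only:

/* Toy model of the input path: keystrokes -> automaton interpreter ->
 * converted string handed to the application via FGetString(). The
 * real automaton is table-driven and handles arbitrary scripts. */
#include <stdio.h>
#include <string.h>

static void automaton_convert(const char *keys, char *out, size_t outlen,
                              int *needs_more)
{
    if (strcmp(keys, "ka") == 0) {
        snprintf(out, outlen, "\u304b");   /* hiragana KA (assumes UTF-8) */
        *needs_more = 1;                   /* kanji candidates may follow */
    } else {
        snprintf(out, outlen, "%s", keys); /* pass through unconverted    */
        *needs_more = 0;
    }
}

/* Hypothetical FGetString(): the application's single entry point.
 * The multiplexer/manager round trip is collapsed into one call; a
 * real implementation would also run the code set converter so the
 * buffer uses the code set the application specified. */
static int FGetString(const char *keystrokes, char *buf, size_t buflen)
{
    int needs_more;
    automaton_convert(keystrokes, buf, buflen, &needs_more);
    return needs_more;
}

int main(void)
{
    char buf[32];
    int more = FGetString("ka", buf, sizeof buf);
    printf("converted: %s (further conversion %s)\n",
           buf, more ? "needed" : "not needed");
    return 0;
}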
A function called FPutString() links the application to the OM (output
method) library in a similar fashion. The code set converter converts the
string to a sequence of code points for the automaton interpreter, which
identifies the language name and generates the font for the characters to
be written. Characters requiring additional context-dependent analysis are
sent to language-specific modules, then returned to the automaton interpreter.
The information thus returned to the application includes the range of code
point strings and the font required.
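The output path can be caricatured the same way. FPutString() is again
named by the project, but the signature and render_info structure below
are illustrative guesses:

/* Toy model of the output path: analyze what a string needs and
 * report the font required to render it. A real implementation
 * converts to code points first and routes context-dependent
 * scripts through their language-specific modules. */
#include <stdio.h>
#include <string.h>

struct render_info {
    char font[32];   /* font the renderer should load */
    int  n_units;    /* crude count of units analyzed */
};

static int FPutString(const char *text, struct render_info *ri)
{
    const unsigned char *p;
    int ascii_only = 1;
    for (p = (const unsigned char *)text; *p; p++)
        if (*p > 0x7f) ascii_only = 0;
    snprintf(ri->font, sizeof ri->font, "%s",
             ascii_only ? "latin-fixed" : "multilingual-outline");
    ri->n_units = (int)strlen(text);  /* bytes, not true code points */
    return 0;
}

int main(void)
{
    struct render_info ri;
    FPutString("hello", &ri);
    printf("font=%s, %d units\n", ri.font, ri.n_units);
    return 0;
}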
Results of the multilingual I/O research
The Waseda team has already produced a large library of software tools related
to the project, including a multilingual version of FORTH, dot matrix and
outline font editors, a high-speed multilingual parser, and a multi-device
input method server. A multilingual version of Common LISP is also being
developed. The researchers have also made extensive modifications to the
X11 libraries in order to accommodate their I/O system.
Although the project will continue for a second three-year term (through
March 1998), Kataoka says that the implementation of the system will be
completed by the summer of 1996. At that time, Waseda will distribute as
freeware both the binaries and all of the source code developed throughout
the course of the project. - S. Myers
Aizu Supercomputer Project (University of Aizu)
Established in the mountains of Fukushima prefecture in April 1993, the
University of Aizu is an international university that emphasizes research
in computer software and hardware. The computer science faculty includes
a large number of foreign professors conducting research in over 20 different
labs, with an abundance of sophisticated computing equipment at their disposal.
Many of the foreign researchers are Russian, and they are playing an interesting
and significant role in establishing the direction of the university's research.
After visiting the University of Aizu, Dr. David Kahaner, a numerical analyst
who was then working with the US Office of Naval Research in Asia, commented
that "the Russian scientists here are particularly interesting; their
isolated system often produced research directions that differ significantly
from those in the West or Japan."
Multimedia Center
The university's multimedia center features an array of intriguing virtual
reality facilities, including an "artificial worlds" zone and
multimedia-aided design system, a human performance analysis system (with
multi-modal human interface), and a multimedia networks & groupware
system. A key component of this virtual reality system is a research project,
headed by Dr. Tsuneo Ikedo and Dr. Nikolay Mirenkov, called the Aizu Supercomputer.
The two scientists describe this computer as being both a special-purpose
computer to control the equipment of the multimedia center and a general-purpose
system to solve virtual reality and simulation problems.
Architecture of the Aizu Supercomputer
The Aizu Supercomputer is made up of a connected set of general-purpose
microprocessors with a distributed memory arrangement - each processor has
its own physical local memory module. A single address space is supported
across all processors by allocating a part of each module to a common global
memory. The initial prototype of the system will have 1,365 processing elements
and achieve a peak speed of more than 100 gigaFLOPS (billion floating-point
operations per second).
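The memory map itself is not given, but one simple way to realize such a
shared region is to block-distribute it across the nodes, so that a flat
global address decomposes into a node number and a local offset. A
hypothetical sketch, with invented constants:

/* Illustrative only: split a flat global address into (node, offset)
 * for a machine where each node donates GLOBAL_PER_NODE bytes of
 * local memory to the shared region. The Aizu Supercomputer's real
 * memory map is not described here. */
#include <stdio.h>

#define N_NODES         1024u            /* bottom-layer PEs             */
#define GLOBAL_PER_NODE (16u << 20)      /* say, 16MB of each 64MB node  */

struct location { unsigned node, offset; };

static struct location translate(unsigned long global_addr)
{
    struct location loc;
    loc.node   = (unsigned)(global_addr / GLOBAL_PER_NODE);  /* which PE   */
    loc.offset = (unsigned)(global_addr % GLOBAL_PER_NODE);  /* where in it */
    return loc;
}

int main(void)
{
    unsigned long addr = 123456789UL;
    struct location loc = translate(addr);
    if (loc.node < N_NODES)
        printf("global address %lu -> node %u, offset %u\n",
               addr, loc.node, loc.offset);
    return 0;
}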
Each processing element consists of a 64-bit MIPS R4400 RISC microprocessor,
two memory management processors, sending and receiving routers, a cache
coherence controller for distributed shared memory configurations, and up
to 64MB of local SRAM with 15-nanosecond access time. The processing elements
(PEs), or nodes, of the Aizu Supercomputer are arranged in a pyramid fashion,
with 1,024 nodes on the bottom layer. (The layer sizes are not spelled out,
but if each upper-layer node oversees four nodes below it, the six layers
would hold 1,024, 256, 64, 16, 4, and 1 PEs - exactly 1,365 in all.) PEs
in the upper layers of the pyramid can be considered a special-purpose
communication network performing operations on "flying" data and
supporting global control. Application-oriented processors, such as those
for graphics, sound, video, and text, are interfaced with the PEs of the
pyramid's bottom layer.
Software approach
The initial software being developed for the Aizu Supercomputer consists
largely of tools designed to make programming more visual and sound-oriented.
The developers are taking a multimedia approach to programming, using animation
clips, icon menus, sound, etc., to make the specification of algorithms
a more natural and intuitive process. According to Dr. Ikedo, "Sounds
will play a great role for users having a good ear for music; colors will
be preferable for users having a good eye for painting; dynamics will be
favorable for users liking speed and expressiveness, etc." After a
user describes the operations for the program to perform, he or she can
watch and listen to the result of the algorithm and thus partially debug
it before execution.
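As a generic illustration of the idea - not of the VIM system itself, whose
internals are not described - algorithm "sonification" can be as simple as
mapping each operation to a pitch, so that a running sort literally plays
its progress:

/* Toy algorithm sonification: during a bubble sort, emit a MIDI-style
 * note number for every swap. Hooked to a synthesizer, rising runs of
 * pitches "sound sorted"; a system like VIM would pair such audio
 * with animation. Purely illustrative of the technique. */
#include <stdio.h>

static void play_note(int value)
{
    printf("note %d ", 60 + value);  /* stand-in for a MIDI device */
}

static void sonified_bubble_sort(int *a, int n)
{
    int i, j, t;
    for (i = 0; i < n - 1; i++)
        for (j = 0; j < n - 1 - i; j++)
            if (a[j] > a[j + 1]) {
                t = a[j]; a[j] = a[j + 1]; a[j + 1] = t;
                play_note(a[j]);     /* each swap becomes a tone */
            }
    printf("\n");
}

int main(void)
{
    int data[] = { 5, 1, 4, 2, 3 };
    sonified_bubble_sort(data, 5);
    return 0;
}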
Currently, the researchers have an experimental version (which they have
dubbed the VIM system) that can present the basic ideas of the technology.
They acknowledge, however, that a great deal of work is required before
they will be able to develop a practical version of the system. They say
they are greatly encouraged by the initial results, and now know exactly
what must be developed, as well as the best procedure for realizing their
research goals. A full 1,365-PE prototype system is expected to be up and
running by the end of the year. - S. Myers
MuSiC++ (NTT Software Corporation)
Large Japanese corporations are well known for their conservative approaches
to research, and at first glance, NTT Software Corporation (a wholly owned
subsidiary of NTT), might seem to fall into this category with its development
of MuSiC++. While this graphical language for creating communications systems
seems rather staid, the package has broad applicability to the development
of complex information networks, which are becoming more and more common.
MuSiC++ is a CASE (computer-aided software engineering) tool being developed
for telecommunications system software creation. If all goes as planned,
it will have passed from development into production by the time this issue
is published. Essentially, the MuSiC++ language is an enhanced version of
the ITU-T standard for message sequence charts (MSCs), a method for graphing
the control flow of complex real-time and interactive applications. "While,
in the past, MSCs were mainly applied to telecommunications systems, as
information networks become more complex, this kind of development system
becomes more important," explains Hideaki Suzuki, a senior manager
of NTT Software. "As multimedia networks and World Wide Web server
networks grow, guaranteeing the integrity of the controlling system software
becomes a big problem. That's where using the enhanced MSC language can
aid a project."
Composing solutions to network problems
When designing network control software, development teams must avoid conflicts
(multiple requests for a single network resource), deadlocks (a halt in
network flow caused by an unresolved conflict), and other errors or bugs.
This need for error-free software development is the driving force behind
the movement to create a consistent environment that aids in software design.
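The deadlock case is worth making concrete, since it is exactly the kind
of error that is hard to spot in source code but easy to see in a message
sequence chart: two processes each hold one resource while waiting forever
for the other's. A minimal, generic illustration in C (not MuSiC++ output):

/* Classic lock-order deadlock: thread A takes lock1 then lock2,
 * thread B takes lock2 then lock1. Run often enough, both block
 * forever - precisely the flow a message sequence chart exposes at
 * design time. The fix is to agree on a single lock order. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock2 = PTHREAD_MUTEX_INITIALIZER;

static void *worker_a(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock1);
    printf("A holds lock1, wants lock2\n");
    pthread_mutex_lock(&lock2);   /* may wait on B forever */
    pthread_mutex_unlock(&lock2);
    pthread_mutex_unlock(&lock1);
    return NULL;
}

static void *worker_b(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock2);
    printf("B holds lock2, wants lock1\n");
    pthread_mutex_lock(&lock1);   /* may wait on A forever */
    pthread_mutex_unlock(&lock1);
    pthread_mutex_unlock(&lock2);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker_a, NULL);
    pthread_create(&b, NULL, worker_b, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}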
For more than 10 years, corporations have been using MSCs to assist in development
of software. In 1992, the language of MSCs was standardized by ITU-T and
approved by other global standards organizations. The extensions provided
by the MuSiC++ language expand what was a development tool into a full-fledged
environment for creating and testing the flow control of network and real-time
software. By using a simple ladder structure to illustrate precedence and
dependence of multiple processes, MuSiC++ allows non-programmers to design
functional descriptions of the interaction between various objects. MuSiC++
also can generate a specification and description language (SDL) file that
describes in a standard graphical format the detailed specifications of
each process and object.
Software for staying in tune
Communications and network software rely on proper timing, and in developing
the software, real-time analysis is essential. Whether developing software
controllers for telecommunications networks or routing software for distributed
networks, assuring that each node behaves correctly is a Herculean task.
MuSiC++ is designed to give project managers the ability to create specifications,
while its SDL generation function provides software engineers with detailed
specifications for development. The developers are also researching extensions
to the environment that allow rigorous testing of SDL files by creating test
suites in TTCN, the international standard test suite notation.
As the "++" hints, the language is based on object-oriented methodologies.
The jump from processes that signal each other to objects that send messages
to each other was a short one, according to Suzuki. The hierarchical structure
of the MuSiC++ toolset enables incremental development of different objects,
which allows project managers to be as general or as specific as necessary
when creating specifications for software engineers.
Other problems that the developers hope to address with the CASE toolset
are those that arise from having multiple development partners working on
a single communications or networking project. "We hope to give project
teams a common specification language with which to communicate," says
Suzuki. Both initial specification creation and later software maintenance
benefit immensely from using the new environment.
MuSiC for the masses
NTT Software Corporation is ramping up to market the product. The push is
to sell 100 or so systems in the first year, mainly to companies in the
US that are designing systems for the telecommunications market. "We
see the results of this project as being broadly applicable," stresses
Suzuki. "Any process control system can benefit from the MuSiC++ development
environment." With its American partner Anonymix, NTT Software Corporation
will target developers of telecommunications and network control systems
and companies creating software for Internet and Web server networks. -
J. Stone