Bayesian Network Research at the University of Regina

by Dr. Cory Butz

Google posted a job advertisement a few months ago seeking people to build large-scale Bayesian networks for its popular search engine. The advertisement noted that word replacement is often necessary when searching the web, and that Google will use Bayesian networks to do a better job than its current techniques. Bayesian networks have also been applied in building intelligent agents, such as Office Assistant, and adaptive user interfaces by Microsoft; process control by NASA and Lockheed; software diagnosis by HP and Nokia; and medical diagnosis, such as the Heart Disease Program at the Massachusetts Institute of Technology and the Pathfinder Project for lymph node diseases at Stanford University.

The founder of Bayesian networks, Judea Pearl, emphasizes the importance of structure in probabilistic inference by opening his chapter on Markov and Bayesian networks with the following quotation: "Probability is not really about numbers; it is about the structure of reasoning."

We have argued that probabilistic reasoning theory without the numbers is relational database theory. For instance, the inference algorithm used in HUGIN, a celebrated probabilistic expert system, is practically identical to a semijoin program, an inference algorithm used in the relational database community. A critical difference, however, is that the database community derived this inference algorithm from only a restrictive class of dependencies (non-embedded dependencies). In contrast, HUGIN uses the same algorithm even though a given Bayesian network can encode both embedded and non-embedded independencies. This clearly shows that the Bayesian network community is not fully exploiting the independencies available in a Bayesian network.
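To make the analogy concrete, here is a minimal sketch of a semijoin in Python (the relations, attribute names, and values are invented purely for illustration): a semijoin keeps only the tuples of one relation that have a matching tuple in another, which is essentially the kind of pruning that message passing performs during inference.

```python
# Toy semijoin: relations are modelled as lists of dicts.
# All relation contents and attribute names below are hypothetical.

def semijoin(r, s, shared):
    """Keep only the tuples of r that join with at least one
    tuple of s on the shared attributes."""
    s_keys = {tuple(t[a] for a in shared) for t in s}
    return [t for t in r if tuple(t[a] for a in shared) in s_keys]

# Hypothetical relations over attributes (A, B) and (B, C).
R = [{"A": 0, "B": 0}, {"A": 0, "B": 1}, {"A": 1, "B": 2}]
S = [{"B": 0, "C": 1}, {"B": 1, "C": 0}]

# R semijoin S removes the dangling tuple (A=1, B=2), much as
# inference prunes configurations with no support elsewhere.
print(semijoin(R, S, ["B"]))  # → [{'A': 0, 'B': 0}, {'A': 0, 'B': 1}]
```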

By utilizing independencies overlooked by all previous algorithms, we proposed the first Bayesian network inference algorithm that precisely articulates the structure of the probability information propagated through the network during inference. Having this structure crystal clear allows us to make more intelligent decisions during inference. Using several real-world Bayesian networks, we have shown an average run-time improvement of 29% over the leading inference technique. Our research paper describing the foundation of this improvement was selected as the very first paper presented at the 2006 Canadian Conference on Artificial Intelligence.

There is a lot of other exciting research taking place in my lab. Two Ph.D. students, Mr. Hong Yao and Mrs. Shan Hua, assisted in the development of the above inference technique. Mrs. Shan Hua is also examining the visualization of Bayesian network inference; we hope this work will make Bayesian networks accessible to a much larger audience. Mr. Wen Yan, another Ph.D. student, is starting work on the application of Bayesian networks in web search, with one goal being to reduce the number of web pages returned during a search session. Mr. Ken Konkel, an M.Sc. student, is close to completing the implementation of a state-of-the-art parallel Bayesian network inference algorithm, which is especially useful now that computers are being built with multiple CPUs. Ken also conducted the experiments mentioned above on a supercomputer with 24 CPUs. Ms. Junying Chen, another M.Sc. student, will investigate how Bayesian networks can be exploited in natural language processing.

Bayesian networks and databases are two skill sets that businesses are seeking. If you are hard working and want to pursue a graduate degree studying either of these two topics, please email me at


Computer Science Research at the New Media Studio Laboratory

by Dr. Xue Dong Yang

The University of Regina New Media Studio Laboratory (NMSL) was established by a Canada Foundation for Innovation (CFI) grant in 2002. It was created to facilitate multidisciplinary research by bringing together faculty and graduate students from Computer Science, Engineering, and Media Production and Studies. As two of the nine principal investigators of the Lab, Dr. Yiyu Yao and I, along with our graduate students, have conducted several research projects in NMSL. The graduate students of my research group have given numerous demonstrations to external visitors in the past years. I highlight a few achievements below.

Visualization in Web Search

Most web search engine interfaces support a model of interaction based on traditional information retrieval: typing text query terms and examining a list of textual search results. Since the ability to read and assess textual information is a limiting factor in information retrieval systems, visual representations of aspects of the user's queries, as well as of the search results, can help users interpret and make sense of the information provided more effectively. O. Hoeber, a Ph.D. student, has developed two interfaces, called HotMap (Figure 1) and Concept Highlighter, to help us understand the complexities of web search results exploration.

The overview map (on the right-hand side of Fig. 1) provides a compact representation of 100 or more document surrogates in a single view. The detailed view (on the left-hand side) shows the specific information about each document surrogate and provides a link to the document. "Hot" documents can be identified at a glance, particularly those that would be buried deep in the result lists of current web search engines. Another tool, called VisiQ (Fig. 2), supports interactive query refinement by visually depicting the query space and allowing users to add or remove query terms suggested by a hierarchical knowledge base.

Real-time Face Detection and Recognition Techniques

The ability to prevent unauthorized users from gaining access to classified or highly sensitive data has become increasingly important to both public and private organizations. Biometrics, particularly face detection and recognition, is a promising family of technologies for personal identification that offers solutions to a wide range of problems.

P. Kort, B. Beattie and R. Dosselman, all Master's students, have developed the technologies and prototype application tools for Defence Research and Development Canada (DRDC). These tools were built using a common webcam and specially designed PC software to perform near real-time face detection and face recognition. The technology developed in this project has several potential commercial applications.


Robotics Lab

by Dr. Malek Mouhoub

Artificial Intelligence (AI) research and education can easily be framed through a paradigm of intelligent agents, which makes robot intelligence a strong motivator for AI. This embodiment contrasts with the majority of computer science subfields, in which computers interact with the real world differently than we do. A robot is more than an experimental platform or a hook for drawing undergraduate and graduate students to the AI discipline; it is a fundamental facet of the AI research field.

In our department's robotics lab, several fundamental and applied AI subareas are being investigated through a suitable robotics platform, including several AmigoBots, Lego Mindstorms kits, and a generic toolkit from which to run these robots. The latter system was developed mainly by Colin Witow during his Master's thesis and enables a user to operate and interact with different types of robots via a user-friendly graphical interface. The user can, for example, select a given task, the resources (robots) needed for this task, and the AI solving methods used to obtain the solution (plan) required to achieve the task. One or more plans are then proposed to the user, who decides whether to simulate the execution of the plan or to execute it directly using the chosen robots. Undergraduate students Vili Bogdan, Ricky Sum, Kevin Bedel, Jieshan Liu, Roger Barbour, and Qiong Wu have contributed to this project by implementing several image processing techniques and AI algorithms based on tree search, stochastic local search, genetic algorithms, and neural networks.

One of the challenging tasks we are currently pursuing in the robotics lab is multiple-robot motion planning in a dynamic and unknown (or partially known) environment. Motion planning algorithms for a single mobile robot have been studied intensively in recent years. In an environment containing only stationary obstacles, path planning methods are guaranteed to return optimal paths in polynomial time, if any exist. Planning in a dynamic environment with moving obstacles, however, is harder and requires exponential time, even in a two-dimensional space. The problem is more challenging still when dealing with multiple mobile robots planning under uncertainty. Indeed, a key issue in handling the uncertainty of an evolving world is how to model it so that it can be effectively accounted for at the motion planning stage.
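As a concrete illustration of the stationary-obstacle case, the following sketch runs breadth-first search on a hypothetical 4-connected grid world (the grid, coordinates, and function name are invented for illustration); BFS returns a shortest obstacle-free path in time linear in the number of cells, consistent with the polynomial-time guarantee mentioned above.

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search on a 4-connected grid. Returns a shortest
    obstacle-free path from start to goal as a list of (row, col)
    cells, or None if the goal is unreachable. Runs in time linear
    in the number of cells."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # also serves as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:          # reconstruct the path backwards
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

# A hypothetical 4x4 workspace; 1 marks a stationary obstacle.
world = [[0, 0, 0, 0],
         [1, 1, 0, 1],
         [0, 0, 0, 0],
         [0, 1, 1, 0]]
path = shortest_path(world, (0, 0), (3, 3))
```

With moving obstacles, no such simple guarantee holds: the planner must reason about time as well as space, which is the source of the exponential cost noted above.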

If you are interested in Robotics at the U of R, please contact me at



by Dr. David Gerhard

Computer games, movies and multimedia applications are making more and more use of high-tech audio, and it is the mandate of the newly established aRMADILo to explore and develop techniques and technologies in this rapidly growing research area. aRMADILo stands for the Rough Music and Audio Digital Interaction Lab and was co-founded by Dr. David Gerhard and Dr. Dominic Slezak with initial grants from the Canada Foundation for Innovation and the Saskatchewan Innovation and Science Fund.

aRMADILo has facilities for studying audio from the waveform level through to the symbolic level. High-quality microphones and a sound isolation enclosure (the "cube of silence") allow for the in-situ multi-channel recording of human and environmental sounds. A collection of MIDI-based musical instruments, including keyboards, a drum kit, trumpet, clarinet and guitar, allows for the study of the highly detailed human control information necessary to make music. Four high-end dual-processor, dual-display workstations and 2 TB of file space provide the computing power required to analyze, classify, and interactively explore the world of computer audio. Cubase, Matlab and Max/MSP are three of the software packages resident in the lab, adding support for recording and mixing, analysis, and developing interactive applications. Each workstation has high-quality headphones, and the lab has three surround sound systems, in 5.1, 7.1 and octophonic configurations.

Areas of investigation explored by aRMADILo include surround sound and audio realism, music information retrieval and library searching, and music interfaces and instrumentation. JJ Nixdorf is a graduate student working on new interfaces and implementations for real-time surround sound. Most modern performed music is presented in one or two channels (mono or stereo), but many musically interesting effects could be achieved if full spatialization were available to performed music. Spatialization is the modification of a sound source so that it comes through a group of speakers at varying amplitudes, providing the perception that the sound is coming from a particular location. These effects are pre-composed (in the case of movies) and occasionally rendered on the fly (in the case of video games), but little work has been done to date on real-time interaction with spatialized sound sources. JJ's work involves the development of a usable interface allowing the manipulation of multiple sound sources in a virtual space, as well as the development of the underlying algorithms to move the sound sources appropriately. His system could also be used to compose sound effects in real time, acting as a foley artist, or to provide spatialization of actor voices in a theatre piece on stage.
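As a simplified illustration of amplitude-based spatialization, here is a sketch of constant-power panning between just two speakers (surround systems generalize this idea to more channels; the function name and the mapping chosen here are our own, not taken from JJ's system):

```python
import math

def constant_power_pan(position):
    """Gains for a source panned between two speakers.
    position: 0.0 = fully left, 1.0 = fully right.
    Constant-power panning keeps gL^2 + gR^2 = 1, so the
    perceived loudness stays steady as the source moves."""
    theta = position * math.pi / 2           # map [0, 1] onto [0, 90] degrees
    return math.cos(theta), math.sin(theta)  # (left gain, right gain)

# Sweep a hypothetical source from left to right.
for pos in (0.0, 0.25, 0.5, 0.75, 1.0):
    gl, gr = constant_power_pan(pos)
    print(f"pos={pos:.2f}  L={gl:.3f}  R={gr:.3f}  power={gl*gl + gr*gr:.3f}")
```

The constant-power rule is a standard choice over simple linear cross-fading, which would make a source sound noticeably quieter halfway through the pan.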

CS 890 CG is the graduate-level audio topics course run by Dr. Gerhard every couple of years. In it, graduate students explore topics that relate to, but are not necessarily limited to, the mandate of aRMADILo. In the past offering of the course, several students undertook interesting projects that made use of the facilities available in aRMADILo. These projects included a predictive music visualization system that produced Winamp-like visualizations based on what was about to happen in the music track, thus evoking a stronger connection between the sound and the visuals; an automatic face animation system that received speech from a microphone, extracted phoneme information, and produced corresponding mouth motion; and a study of the differences in spatialization perception between different people using the head-related transfer function.

Other research projects of note related to the lab include explorations of alternative interfaces for music and alternative uses of music interface knowledge, development of music interface systems for people with disabilities, and composition of multimedia artworks and installations.

There are many opportunities for research work in aRMADILo for interested graduate students or upper-level undergraduates. For more information, contact Dr. David Gerhard at, explore the aRMADILo website at http:/, or drop by the lab, which is located in the lab building, room 143.
