The information on this page is incomplete and the schedule is tentative. Please check back later for more information.

  • Title: Multi-Attribute Combinatorial Reverse Auctions
    Speaker: Shubhashis Kumar Shil

    Date: October 26, 2016
    Time: 2:30pm - 3:20pm
    Place: CL 418

    Abstract: We propose a new type of Combinatorial Reverse Auction (CRA) that considers multiple products with multiple units, multiple attributes, and multiple objectives. In our proposed CRA, the buyer specifies requirements and constraints along with objectives, and the sellers then express their constraints on the attributes. To tackle the Winner Determination (WD) problem in the CRA, we propose an optimization approach based on Genetic Algorithms (GAs). We integrate our own variants of diversity and elitism techniques with the GA to avoid local optima and to preserve the best solutions, respectively. We validate our approach through experiments on simulated data by performing behaviour tests, and we show the significant superiority of our proposed method over some well-known heuristic and exact WD techniques.
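
    As a rough illustration of the GA machinery described in the abstract (elitism to preserve the best solutions, mutation to maintain diversity), the following toy sketch assigns each product to a winning seller so as to minimize total procurement cost. All data, parameter values, and function names are hypothetical and greatly simplified relative to the proposed method:

```python
import random

# Illustrative toy instance (all values hypothetical): each of 3 products
# can be bought from any of 4 sellers; price[s][p] is seller s's ask.
random.seed(1)
N_SELLERS, N_PRODUCTS = 4, 3
price = [[random.randint(10, 99) for _ in range(N_PRODUCTS)]
         for _ in range(N_SELLERS)]

def cost(chromosome):
    # A chromosome assigns each product index p to one winning seller.
    return sum(price[s][p] for p, s in enumerate(chromosome))

def ga(pop_size=20, generations=50, elite=2, mutation=0.2):
    pop = [[random.randrange(N_SELLERS) for _ in range(N_PRODUCTS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        nxt = pop[:elite]                      # elitism: keep best solutions
        while len(nxt) < pop_size:
            a, b = random.sample(pop[:10], 2)  # pick parents among the fitter half
            cut = random.randrange(1, N_PRODUCTS)
            child = a[:cut] + b[cut:]          # one-point crossover
            if random.random() < mutation:     # mutation preserves diversity
                child[random.randrange(N_PRODUCTS)] = random.randrange(N_SELLERS)
            nxt.append(child)
        pop = nxt
    return min(pop, key=cost)

best = ga()
# Lower bound for this toy instance: cheapest seller per product.
optimal = sum(min(price[s][p] for s in range(N_SELLERS))
              for p in range(N_PRODUCTS))
print(cost(best), optimal)
```

    Real WD instances add side constraints (units, attribute thresholds, seller capacities), which is what makes the problem combinatorial and motivates the heuristic comparison mentioned in the abstract.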

  • Title: Kernelized Fuzzy Rough Sets: Characterizing Inconsistency in Classification
    Speaker: Dr. Qinghua Hu - School of Computer Science and Technology, Tianjin University, Tianjin, China

    Date: Thursday, November 10, 2016
    Time: 2:00pm - 3:00pm
    Place: ED 191

    Abstract: Rough sets are now widely used to characterize the inconsistency of classification and regression tasks. Given a universe U, we can granulate and organize the elements in U with a relation R, and then apply the derived information granules to approximate any other granules defined in advance. In fact, most rough set models are named after the relation R, such as neighborhood rough sets, dominance rough sets and fuzzy preference rough sets. R determines the structure of the approximation space, and thus has a great impact on the computation of lower and upper approximations. However, few works systematically discuss the issue of relation generation. In this talk, we will build a bridge between kernel machines and rough sets, and show that a collection of kernel functions can be used to calculate the relations between objects. In particular, fuzzy equivalence relations can be generated with some symmetric and reflexive kernels. We integrate the kernel functions with rough sets and construct kernelized rough set models. Moreover, we design multi-kernel fuzzy rough sets to analyze multi-modality data, such as mixtures of audio, text, images and video. In addition, it has been reported that fuzzy rough sets are sensitive to noisy samples. We develop some robust kernelized fuzzy rough set models to address this challenge.

    After the model of kernelized fuzzy rough sets is developed, a natural question is what problems it can be used to address. First, the model is used to find the boundary samples in classification learning; since the classification surface is generally determined by these samples, it is very useful to identify them before training. Second, we define some statistical factors, which we call fuzzy dependency functions, for evaluating features based on the proposed model, and design some efficient feature selection algorithms on top of these functions. Finally, we propose an interesting technique for fuzzy multi-label classification. All the proposed models and algorithms are tested on real-world tasks.
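
    For reference, the lower and upper approximations central to the talk are commonly written in the following standard fuzzy rough set form, where the relation R is generated by a reflexive, symmetric kernel such as the Gaussian (these are standard textbook definitions; the talk's exact formulation may differ, and δ is a kernel width parameter):

```latex
\underline{R}A(x) = \inf_{y \in U} \max\bigl(1 - R(x,y),\, A(y)\bigr),
\qquad
\overline{R}A(x) = \sup_{y \in U} \min\bigl(R(x,y),\, A(y)\bigr),
\qquad
R(x,y) = \exp\!\left(-\frac{\lVert x - y \rVert^{2}}{\delta}\right).
```

    The lower approximation measures how certainly x belongs to the fuzzy set A given its kernel-similar neighbors, and the upper approximation how possibly it does; the gap between them is the inconsistency the abstract refers to.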

    Qinghua Hu received the B.S., M.S., and Ph.D. degrees from Harbin Institute of Technology, Harbin, China. He then became a Post-Doctoral Fellow with the Department of Computing, Hong Kong Polytechnic University, Hong Kong. He joined Tianjin University in 2012 and is currently a Full Professor and the Vice Dean of the School of Computer Science and Technology, where he also directs the Lab of Machine Learning and Data Mining. He has published over 100 journal and conference papers in the areas of granular-computing-based machine learning, reasoning with uncertainty, and pattern recognition, in venues including IJCAI, AAAI, ICCV, and IEEE Transactions on Fuzzy Systems. Prof. Hu served as Program Committee Co-Chair of the International Conference on Rough Sets and Current Trends in Computing in 2010, of the Chinese Rough Set and Soft Computing Society conference in 2012 and 2014, and of the International Conference on Rough Sets and Knowledge Technology and the International Conference on Machine Learning and Cybernetics in 2014, and as General Co-Chair of IJCRS 2015. He is currently organizing a special issue of Information Sciences entitled Granular Computing Based Machine Learning in the Era of Big Data, and will organize the China Conference on Machine Learning (CCML 2017) in Tianjin, where he will serve as PC Chair.

  • Title: On the development of a von Neumann interactive system
    Speaker: Trevor Tomesh

    Date: Wednesday, Nov. 16, 2016
    Time: 2:30pm - 3:20pm
    Place: CL 418

    Abstract: Interactive hardware systems can be defined in terms of classical von Neumann components -- input, control, memory, output and an external recording medium. In contrast to the machine described in von Neumann's "First Draft of a Report on the EDVAC," however, an interactive system requires that the structure identified as the "outside recording medium" not be treated as a passive external read/write device, but rather as a complete, active von Neumann structure of its own. In this seminar, the logical consequences of this assertion are discussed at length, along with an account of the development and demonstration of a "von Neumann interactive system" implementation.

  • Title: Bot Detection
    Speaker: Richard Dosselmann

    Date: Friday November 18, 2016
    Time: 2:30pm - 3:20pm
    Place: CL 418

    Abstract: Malicious bots pose a genuine threat to the online community. A malicious bot is any type of automated tool that illegitimately operates on a given site, forum, account, online game, etc. Such a bot may gather data in an attempt to duplicate an organization's information. In other instances, a bot might upload data, a move that can readily corrupt valid information. When it comes to online game play, a bot allows a user to obtain a decidedly unfair advantage over others, for instance by acquiring additional points or extra resources. In all cases, a bot generally consumes bandwidth and interferes with normal network traffic. Perhaps the most identifying feature of a bot is its rigid and repetitive behavior. This differs from a human user, who typically displays more varied behavior. This research attempts to detect bots operating on various sites by identifying and measuring recurring sequences of actions.
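
    The idea of flagging rigid, repetitive behavior can be sketched as follows; the metric, the window length, and the session data are hypothetical illustrations, not the speaker's actual detector:

```python
from collections import Counter

def repetition_score(actions, n=3):
    """Fraction of length-n action windows that are exact repeats.

    A score near 1 suggests rigid, bot-like behavior; near 0 suggests
    the more varied behavior typical of human users. (Hypothetical
    metric for illustration only.)
    """
    windows = [tuple(actions[i:i + n]) for i in range(len(actions) - n + 1)]
    if not windows:
        return 0.0
    counts = Counter(windows)
    repeats = sum(c for c in counts.values() if c > 1)
    return repeats / len(windows)

bot   = ["login", "farm", "sell"] * 10           # rigid scripted loop
human = ["login", "chat", "farm", "trade", "explore",
         "sell", "chat", "quest", "farm", "rest"]  # varied session
print(repetition_score(bot), repetition_score(human))
```

    In practice a detector would combine such sequence statistics with timing features (inter-action delays, for example), since humans also vary *when* they act, not just *what* they do.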

  • Title: A Survey on Recommender Systems
    Speaker: Bingyu Li

    Date: November 28, 2016
    Time: 2:30pm - 3:20pm
    Place: CL 418

    Abstract: By studying patterns of behavior to predict what someone will prefer among a collection of things they have never experienced, recommender systems have changed the way people find products, interests, and even other people. Different approaches are implemented for the overall design of a recommender system, and different techniques are used as tools to make recommendations within each approach. This work aims to review and classify the existing works in the recommender system research domain, as well as to provide an overview of the motivations and the final goals of my future research.
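
    One family of approaches such a survey typically covers is user-based collaborative filtering. A minimal sketch with cosine similarity follows; the users, items, and ratings are toy data invented for illustration, not drawn from the talk:

```python
import math

# Toy user-item rating matrix (hypothetical data); missing = unrated.
ratings = {
    "alice": {"A": 5, "B": 3, "C": 4},
    "bob":   {"A": 4, "B": 1, "C": 5},
    "carol": {"A": 1, "B": 5},
}

def cosine(u, v):
    # Cosine similarity between two sparse rating vectors.
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv)

def recommend(user, k=1):
    # Score each unseen item by similarity-weighted ratings of other users.
    me = ratings[user]
    scores = {}
    for other, theirs in ratings.items():
        if other == user:
            continue
        sim = cosine(me, theirs)
        for item, r in theirs.items():
            if item not in me:
                scores[item] = scores.get(item, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("carol"))  # carol has not rated item "C"
```

    Content-based and hybrid methods, the other main branches a survey would classify, replace the user-user similarity here with item-feature or model-based scores.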

  • Title: The SOLID Programming Principles
    Speaker: Justin Cooney

    Date: November 30, 2016
    Time: 2:30pm - 3:20pm
    Place: CL 418

    Abstract: This presentation will provide an analysis of the five principles of SOLID programming: the Single Responsibility Principle, the Open-Closed Principle, the Liskov Substitution Principle, the Interface Segregation Principle, and the Dependency Inversion Principle. The presentation will provide clear definitions and examples for each principle and will demonstrate how these practices can be used to increase the maintainability and correctness of a software system's codebase. In particular, it will demonstrate how these principles can be used to write well-modularized, easily unit-testable code and will provide some insight into the advantages of using a top-down development approach.
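
    As a brief sketch of how several of these principles interact in practice (the class and method names below are illustrative inventions, not taken from the presentation):

```python
from abc import ABC, abstractmethod

# Dependency Inversion: high-level code depends on this abstraction,
# not on any concrete notification channel.
class Notifier(ABC):
    @abstractmethod
    def send(self, message: str) -> str: ...

class EmailNotifier(Notifier):
    def send(self, message: str) -> str:
        return f"email: {message}"

class SmsNotifier(Notifier):
    def send(self, message: str) -> str:
        return f"sms: {message}"

class OrderService:
    # Single Responsibility: this class only places orders; delivering
    # notifications is delegated to the injected Notifier.
    def __init__(self, notifier: Notifier):
        self.notifier = notifier

    def place_order(self, item: str) -> str:
        return self.notifier.send(f"order placed: {item}")

# Liskov Substitution: any Notifier subclass works interchangeably,
# which also makes OrderService trivial to unit-test with a stub.
print(OrderService(EmailNotifier()).place_order("book"))
print(OrderService(SmsNotifier()).place_order("book"))
```

    The same structure illustrates the unit-testability point from the abstract: a test can inject a fake `Notifier` and assert on the returned message without touching email or SMS infrastructure.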
