
Overview

Feature representation is at the core of many computer vision and pattern recognition applications, such as image classification, object detection, image and video retrieval, and image matching. For years, milestone engineered feature descriptors such as the Scale Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF), the Histogram of Oriented Gradients (HOG) and the Local Binary Pattern (LBP) dominated various domains of computer vision. The design of feature descriptors with low computational complexity has also attracted considerable attention, and a number of efficient descriptors, including BRIEF, FREAK, BRISK and DAISY, have been presented. In the past few years we have witnessed significant progress in feature representation and learning. Traditional handcrafted features have largely been overtaken in popularity by Deep Convolutional Neural Networks (DeepCNNs), which learn powerful features automatically from data and have brought about breakthroughs in a wide range of computer vision problems. However, these advances rely on deep networks with millions or even billions of parameters, and their success hinges on the availability of GPUs with very high computational capability and of large-scale labeled datasets. In other words, powerful DeepCNNs are data hungry and energy hungry.

Nowadays, with the number of images and videos growing exponentially, the emerging phenomenon of big dimensionality (millions of dimensions and above) exposes the inadequacy of existing approaches, whether traditional handcrafted features or recent deep learning based ones. There is thus a pressing need for new scalable and efficient approaches that can cope with this explosion of dimensionality. In addition, with the prevalence of social media networks and portable/mobile/wearable devices, which have limited resources (e.g., battery life, memory, storage space, CPUs and bandwidth), the demand for sophisticated portable/mobile/wearable applications that handle large-scale visual data is rising. In such applications, real-time performance is of utmost importance, since users are rarely willing to wait. There is therefore a growing need for feature descriptors that are fast to compute, memory efficient, and yet exhibit good discriminability and robustness. A number of efforts in this direction, such as compact binary features, DCNN quantization, simple and efficient neural network architectures, and big-dimensionality-oriented feature selection, have appeared at top conferences (including CVPR, ICCV, ECCV, NIPS and ICLR) and in top journals (including TPAMI and IJCV). The aim of this workshop is to encourage researchers in computer vision to present high-quality work and to provide a cross-fertilization ground for stimulating discussions on the next steps in this important research area.


Important Dates (Tentative)

Event                          Date
Paper Submission Deadline      March 24, 2019
Notification of Acceptance     April 6, 2019
Camera-ready Due               April 18, 2019
Workshop (Half Day)            June 16, 2019 (pm)


Topics

We encourage researchers to study and develop new compact and efficient feature representations that are fast to compute, memory efficient, and yet exhibit good discriminability and robustness. We also encourage new theories and applications related to feature representation and learning that address these challenges. We are soliciting original contributions on a wide range of theoretical and practical issues, including but not limited to:

1. New features (handcrafted features, lightweight DeepCNN architectures, deep model compression/quantization, and feature learning in a supervised, weakly supervised or unsupervised manner) that are fast to compute, memory efficient and suitable for large-scale problems;

2. New compact and efficient features suitable for wearable devices (e.g., smart glasses, smartphones, smart watches) with strict requirements on computational efficiency and power consumption;

3. Hashing/binary code learning and related applications in different domains, e.g., content-based retrieval;

4. Evaluations of traditional handcrafted descriptors and of features learned by deep learning;

5. Hybrid methods combining the strengths of handcrafted and learning-based approaches;

6. New applications of existing features in different domains, e.g., the medical domain.


Program Outline (Half Day)

Time           Event
13:50~14:00    Welcome and Introduction
14:00~14:45    Invited Talk 1
14:45~15:25    Oral Session 1 (2 presentations, 20 min each)
15:25~16:25    Poster Session
16:25~17:10    Invited Talk 2
17:10~17:50    Oral Session 2 (2 presentations, 20 min each)
17:50~18:00    Closing Remarks


Paper Submission Information

All submissions will be handled electronically via the workshop’s CMT Website. Click the following link to go to the submission site: https://cmt3.research.microsoft.com/CEFRL2019

Papers should describe original and unpublished work on the related topics. Each paper will receive double-blind review, moderated by the workshop chairs. Authors should take into account the following:

- All papers must be written and presented in English.

- All papers must be submitted in PDF format. The workshop paper format guidelines are the same as for the main conference papers.

- The maximum paper length is 8 pages (excluding references). Note that shorter submissions are also welcome.

- Accepted papers will be published in the CVF open access archive as well as in IEEE Xplore.


Organizers

Dr. Li Liu
(University of Oulu & NUDT)
Dr. Wanli Ouyang
(University of Sydney)

Dr. Jiwen Lu
(Tsinghua University)
Prof. Matti Pietikäinen
(University of Oulu)


Previous CEFRL Workshop

· 1st CEFRL Workshop in conjunction with ICCV 2017

· 2nd CEFRL Workshop in conjunction with ECCV 2018


Please contact Li Liu if you have any questions. The webpage template is courtesy of Georgia.