Zhouhan Lin


Research Experience

I have been involved in several research projects at the Institute of Image and Information Technology, Harbin Institute of Technology, since July 2011. During that time, my research has focused on machine learning, especially deep learning, and on accelerating the related algorithms in parallel with CUDA.

Deep Learning

The most prominent nonlinear feature extraction methods used to be kernel-based methods and manifold learning, but each has its shortcomings. For kernel methods, the choice of kernel is unrelated to how the data is actually distributed; for manifold learning, the obstacle that keeps the algorithms from being widely applied is that they do not scale to large datasets. Instead of resorting to sophisticated differential geometry, another way to learn data features is to use simple but iterative methods to approximate the data distribution, i.e., deep neural networks.

Although I first got to know deep learning from Prof. Wang Gang through restricted Boltzmann machines, my research on deep learning mainly focuses on several kinds of autoencoders, such as the denoising autoencoder, the contractive autoencoder, and the gated autoencoder. Together with my supervisor, Prof. Yushi, I introduced deep learning methods into the field of hyperspectral image processing and proposed a new joint spatial-spectral classification method.
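To illustrate the kind of model involved, here is a minimal tied-weight denoising autoencoder in NumPy. This is a generic sketch of the technique, not the models used in the work above: the data, layer sizes, and hyperparameters are all arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_dae(X, n_hidden=16, noise=0.3, lr=0.1, epochs=200):
    """Tied-weight denoising autoencoder trained with squared error."""
    n, d = X.shape
    W = rng.normal(0.0, 0.1, (d, n_hidden))
    bh = np.zeros(n_hidden)
    bo = np.zeros(d)
    for _ in range(epochs):
        # Corrupt inputs by zeroing a random fraction (masking noise).
        Xc = X * (rng.random(X.shape) > noise)
        H = sigmoid(Xc @ W + bh)       # encode the corrupted input
        R = sigmoid(H @ W.T + bo)      # decode with tied weights
        E = R - X                      # reconstruct the *clean* input
        # Backpropagate through decoder and encoder (shared W).
        dR = E * R * (1.0 - R)
        dH = (dR @ W) * H * (1.0 - H)
        W -= lr * (Xc.T @ dH + dR.T @ H) / n
        bh -= lr * dH.mean(axis=0)
        bo -= lr * dR.mean(axis=0)
    return W, bh, bo

X = rng.random((100, 8))
W, bh, bo = train_dae(X)
R = sigmoid(sigmoid(X @ W + bh) @ W.T + bo)
mse = float(np.mean((R - X) ** 2))
```

The key point of the denoising variant is visible in the loss: the network sees the corrupted `Xc` but is penalized against the clean `X`, which forces it to learn structure rather than the identity map.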

During my internship at LISA, the deep learning lab at the University of Montreal, I rapidly became more familiar with deep learning algorithms and the motivations behind those models. Under the supervision of Prof. Roland Memisevic and Prof. Yoshua Bengio, I learned the theory of Generative Stochastic Networks and applied it to gated autoencoders, which learn the relation between two images; I experimented with inference and sampling on these models.

Manifold Learning

When I commenced my postgraduate study, I learned more pattern recognition methods and tried to improve them. By applying manifold learning algorithms to the feature extraction step of hyperspectral image classification, my adviser Prof. Chen and I made some progress. This work includes a kNN classifier based on Riemannian manifold learning, a supervised LLE algorithm, and a maximum likelihood estimator revised by Riemannian manifold learning. All three proposed methods improve classification accuracy, though to varying degrees.

CUDA Parallel Computing

Accelerating sophisticated algorithms in pattern recognition and related areas was another topic of my research as an undergraduate. For example, I implemented the Spectral Angle Mapping algorithm on an NVIDIA GeForce 560 using CUDA; compared with its sequential counterpart written in C, the total speedup is around 80x. I also implemented and optimized a parallel version of the Simplex Volume Algorithm in CUDA, yielding a total speedup of more than 300x over its CPU counterpart. To compare the acceleration efficiency of different parallel architectures, I also built a 4-node cluster with both Hadoop and MATLAB Distributed Computing Server together with my partner Haicheng Qu.
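For reference, the computation that Spectral Angle Mapping parallelizes so well is simple: the angle between each pixel's spectrum and a reference spectrum. A NumPy sketch (the spectra below are made-up illustrative values):

```python
import numpy as np

def spectral_angle(pixels, ref):
    """Spectral Angle Mapper: angle in radians between each pixel
    spectrum (one row per pixel) and a reference spectrum.
    A smaller angle means a closer spectral match."""
    dots = pixels @ ref
    norms = np.linalg.norm(pixels, axis=1) * np.linalg.norm(ref)
    cos = np.clip(dots / norms, -1.0, 1.0)  # guard against round-off
    return np.arccos(cos)

ref = np.array([0.2, 0.5, 0.9])
pixels = np.vstack([ref,                      # identical spectrum
                    2.0 * ref,                # same shape, brighter
                    np.array([0.9, 0.5, 0.2])])  # different material
angles = spectral_angle(pixels, ref)
```

Because the angle ignores the magnitude of the spectrum, a uniformly brighter pixel still scores an angle of zero, which is what makes SAM robust to illumination differences; and since every pixel is processed independently, the algorithm maps naturally onto one CUDA thread per pixel.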

During my internship at the Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, I independently implemented a QR factorization algorithm on the GPU-equipped Shuguang-1000 supercomputer. It was a very pleasant experience working with the research group there, led by Dr. Jun Li.
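The source does not say which QR formulation was used on the GPU, but one common sequential reference is Householder reflections; a NumPy sketch of that formulation, with an arbitrary test matrix:

```python
import numpy as np

def householder_qr(A):
    """QR factorization by Householder reflections:
    A = Q R with Q orthogonal and R upper triangular."""
    m, n = A.shape
    R = A.astype(float).copy()
    Q = np.eye(m)
    for k in range(min(m, n)):
        x = R[k:, k]
        v = x.copy()
        # Reflect x onto a multiple of e1; the sign choice
        # avoids cancellation.
        v[0] += np.copysign(np.linalg.norm(x), x[0])
        norm_v = np.linalg.norm(v)
        if norm_v == 0.0:
            continue  # column already zeroed below the diagonal
        v /= norm_v
        # Apply the reflector I - 2 v v^T to the trailing block of R
        # and accumulate it into Q.
        R[k:, :] -= 2.0 * np.outer(v, v @ R[k:, :])
        Q[:, k:] -= 2.0 * np.outer(Q[:, k:] @ v, v)
    return Q, R

A = np.array([[2.0, 1.0],
              [1.0, 3.0],
              [0.0, 1.0]])
Q, R = householder_qr(A)
```

Each reflector zeroes one column below the diagonal; on a GPU, the expensive part, applying the reflector to the trailing submatrix, is a rank-1 update that parallelizes well.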

Bachelor's Thesis: A CUDA-Accelerated Version of PCA

In my bachelor's graduation thesis, I investigated Principal Component Analysis (PCA) and its application to hyperspectral image classification. I independently implemented PCA in C, and its parallel counterpart in CUDA, without using ready-made mathematical libraries. To my honor, this work was rated an excellent graduation thesis by the university.
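The thesis implementations were in C and CUDA; as a compact illustration of the same computation, here is the standard eigendecomposition formulation of PCA in NumPy (the data shape and component count below are arbitrary):

```python
import numpy as np

def pca(X, k):
    """Project X onto its top-k principal components via
    eigendecomposition of the sample covariance matrix."""
    Xc = X - X.mean(axis=0)               # center each feature/band
    C = Xc.T @ Xc / (X.shape[0] - 1)      # sample covariance matrix
    vals, vecs = np.linalg.eigh(C)        # eigh: C is symmetric
    order = np.argsort(vals)[::-1]        # decreasing variance
    return Xc @ vecs[:, order[:k]]        # project onto top-k axes

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))            # e.g. 200 pixels, 10 bands
Z = pca(X, 3)
```

For hyperspectral data, each pixel's spectrum is a row of `X`, so PCA compresses hundreds of correlated bands into a few components; the covariance computation and the projection are both matrix products, which is why the algorithm accelerates well on a GPU.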

Master's Thesis: Deep Learning Based Hyperspectral Image Feature Extraction and Classification (in progress)

My master's thesis focuses on two parts. The first develops the interpretation of autoencoders, which is more scientific and statistical in nature; the second applies the model to hyperspectral classification, which involves more engineering work such as coding and parameter tuning. I will post the manuscript on this site after I finish it.

Generally speaking, I like doing research and I think I am suited to it. In secondary school I won a series of academic Olympiad competitions in math and physics. In my first year of university, I won a Freshman Foundation grant for research on heating radiators. All these achievements, however trivial, have convinced and encouraged me to continue doing research in my twenties.