
Ahmet Enis Cetin, Ph.D.

Research Professor

Department of Electrical and Computer Engineering

Contact

Building & Room:

3009 ERF

Address:

842 W. Taylor St., Chicago, IL 60607

Office Phone:

(312) 996-4971

Email:

aecyy@uic.edu


About

Professional Achievements

Fellow of the IEEE

Member, Turkish Academy of Sciences, 2015-present

Editorial Board Member, IEEE Signal Processing Magazine, 2013-2016

Special Issue Editor, Signal Processing for Assisted Living, IEEE Signal Processing Magazine, 2016

Associate Editor, IEEE Transactions on Circuits and Systems for Video Technology, 2014-2016

Editor-in-Chief, Signal, Image and Video Processing (Springer Nature, SCI Impact Factor 1.8), March 2013-present

Editorial Board Member, Signal Processing (EURASIP: European Association for Signal Processing), 2006-2010

Selected Grants

INTEL LABS, Scalable Multibit Precision In-SRAM Deep Neural Network Processing by Co-designing Network Operators with In-Memory Computing Constraints, Co-PI

Selected Publications

Pan, H., Badawi, D. and Cetin, A.E., 2021. Fast Walsh-Hadamard Transform and Smooth-Thresholding Based Binary Layers in Deep Neural Networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 4650-4659).

Badawi, Diaa, Hongyi Pan, Sinan Cem Cetin, and A. Enis Cetin. “Computationally Efficient Spatio-Temporal Dynamic Texture Recognition for Volatile Organic Compound (VOC) Leakage Detection in Industrial Plants.” IEEE Journal of Selected Topics in Signal Processing (2020).

Pan, H., Badawi, D., Zhang, X., & Cetin, A. E. (2019). Additive neural network for forest fire detection. Signal, Image and Video Processing, 1-8.

Muneeb, Usama, Erdem Koyuncu, Yasaman Keshtkarjahromd, Hulya Seferoglu, Mehmet Fatih Erden, and A. Enis Cetin. “Robust and Computationally-Efficient Anomaly Detection Using Powers-Of-Two Networks.” In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2992-2996. IEEE, 2020.

Shahdloo, M., Ilicak, E., Tofighi, M., Saritas, E.U., Çetin, A.E. and Çukur, T., 2019. Projection onto epigraph sets for rapid self-tuning compressed sensing MRI. IEEE Transactions on Medical Imaging, 38(7), pp. 1677-1689.

Kapu, H., Saraswat, K., Ozturk, Y. and Cetin, A.E., 2017. Resting heart rate estimation using PIR sensors. Infrared Physics & Technology, 85, pp.56-61.

Töreyin, B. Uğur, Yiğithan Dedeoğlu, Uğur Güdükbay, and A. Enis Cetin. “Computer vision based method for real-time fire and flame detection.” Pattern Recognition Letters 27, no. 1 (2006): 49-58.

Cetin, A.E., Gerek, O.N. and Yardimci, Y., 1997. Equiripple FIR filter design by the FFT algorithm. IEEE Signal Processing Magazine, 14(2), pp.60-64.

Jabloun, F., Cetin, A.E. and Erzin, E., 1999. Teager energy based feature parameters for speech recognition in car noise. IEEE Signal Processing Letters, 6(10), pp.259-261.

Erzin, E. and Cetin, A.E., 1993, April. Interframe differential vector coding of line spectrum frequencies. In 1993 IEEE International Conference on Acoustics, Speech, and Signal Processing (Vol. 2, pp. 25-28). IEEE.

Notable Honors

2012, Best Paper Award, International Conference on Progress in Cultural Heritage Preservation

Education

Ph.D. Systems Engineering, University of Pennsylvania, 1987

M.S.E. Electrical Engineering, University of Pennsylvania, 1986

B.Sc. Electrical Engineering, METU, Ankara, Turkey, 1984

Research Currently in Progress

Dr. Cetin's research interests are in the areas of inverse problems, biomedical image processing, computational camera design, ambient assisted living sensors and systems, agricultural systems, computer vision, and environmental monitoring systems.

Cyber-Physical Environment Monitoring Systems: I have been developing Cyber-Physical Systems (CPS) for environment monitoring. My students and I developed a computer vision-based wildfire detection system using ordinary visible-range Pan-Tilt-Zoom (PTZ) cameras placed on forest-watch towers in 2008 and 2009. Since smoke rises above the crowns of trees immediately after a wildfire starts, ordinary cameras can spot it. We modeled the smoke using (i) wavelet-domain texture parameters, (ii) the covariance matrix of slow-moving regions, and (iii) gray color information, and developed an adaptive machine vision algorithm capable of detecting smoke in video in real time. The resulting system is low-cost because it does not rely on expensive infrared (IR) technology.
We received funding from the Ministry of Environment in Turkey and the European Commission (FIRESENSE: https://cordis.europa.eu/result/rcn/143051_en.html), and we received the best paper award for this work at a conference organized by UNESCO and the European Union in 2012. The Turkish General Directorate of Forestry installed the wildfire detection system on more than 100 forest look-out towers in Mediterranean Turkey. We currently have wildfire detection systems in many countries, including the US, Cyprus, Greece, Korea, and Singapore. As part of the FIRESENSE project, we also developed differential infrared flame detection sensors for fire-sensitive areas and historic buildings, and we deployed a wireless camera and sensor network in the ancient city of Rhodiapolis by the Mediterranean [14,15,20]. I plan to develop a tethered balloon (or drone)-based wildfire detection system using regular and infrared cameras. Currently, my students and I are working on computationally efficient deep learning methods for wildfire detection, and we are converting our classical computer vision-based algorithm to a deep-learning-based approach.
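The region-covariance cue mentioned above can be illustrated with a short sketch: compute a covariance matrix of simple per-pixel features over a candidate slow-moving region and use its upper triangle as a descriptor for a classifier. The feature choice below (intensity plus gradient magnitudes) is an illustrative assumption, not necessarily the feature set used in the deployed system.

```python
import numpy as np

def region_covariance(patch):
    """Covariance descriptor of a candidate smoke region.

    patch: (H, W) grayscale frame region, float.
    Per-pixel features here are intensity, |dI/dx|, and |dI/dy|
    (a simplified, hypothetical choice).
    """
    dy, dx = np.gradient(patch)                       # image gradients
    feats = np.stack([patch, np.abs(dx), np.abs(dy)],
                     axis=-1).reshape(-1, 3)          # one row per pixel
    C = np.cov(feats, rowvar=False)                   # 3x3 region covariance
    iu = np.triu_indices(3)
    return C[iu]                                      # upper triangle as feature
```

The descriptor is compact (6 numbers for 3 features) and fairly insensitive to region size, which is why covariance features are popular for texture-like patterns such as smoke.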
We developed a novel neural network based on a multiplication-free operator related to the l_1 norm. I am currently collaborating with a start-up (Volant-Aerial) to develop wildfire detection software from a tethered balloon. Our work is funded by an NSF SBIR grant. We were invited to present our work in the “Tactical Fire Remote Sensing Advisory Committee Meeting” organized by USDA and NASA in December 2021.
In 2017, I received an NSF CPS grant entitled “Constantly on the Lookout: Low-cost Sensor Enabled Explosive Detection to Protect High Density Environments,” together with Profs. Alex Orailoglu (UCSD), Sule Ozev (ASU), and Chengmo Yang. We recently started working on volatile organic compound (VOC) vapor leak detection using infrared sensors and chemical sensors in our NSF CPS project [21]. In this project, we envision a CPS made up of large numbers of ordinary people carrying low-cost chemical sensors coupled to smartphones with the necessary computation and communication capabilities. Each person with a sensor will be a node of the CPS network. This project will develop algorithms and methodologies for engineering a cyber-physical system that can detect small amounts of chemical threats in the air to protect a given area. We developed machine-learning-based methods that eliminate the “sensor drift” problem in chemical sensors and make infrared sensors more reliable [16],[19]. We will use the same framework to develop a novel CPS to detect methane leaks in urban environments: natural gas (mostly methane) leaks into the air, and these leaks are a major problem for the climate.
We will develop neural networks based on our computationally efficient Multiplication-Free (MF) operator related to the l_1 norm [3],[4] under in-memory computing constraints for edge applications [24]. My colleague Amit Trivedi and I received a research grant from Intel Labs entitled “Scalable Multibit Precision In-SRAM Deep Neural Network Processing by Co-designing Network Operators with In-Memory Computing Constraints,” 2020-2023 [24]. Our networks are energy efficient because they avoid multiplications. We will develop energy-efficient neural networks for CPS applications requiring edge computation.
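As a rough illustration of the multiplication-free idea, one common form of the MF “vector product” replaces each elementwise product x_i·y_i with sign(x_i·y_i)(|x_i| + |y_i|), so that only additions and sign manipulations are needed in a fixed-point implementation. The exact operator varies across the papers, so treat this as a hedged sketch:

```python
import numpy as np

def mf_dot(x, y):
    """Multiplication-free surrogate for the inner product:
       m(x, y) = sum_i sign(x_i * y_i) * (|x_i| + |y_i|).
    Only sign, absolute value, and addition are needed per term
    (the np.sign(x * y) here is just a convenient way to write
    sign agreement in NumPy)."""
    return float(np.sum(np.sign(x * y) * (np.abs(x) + np.abs(y))))
```

A key property, noted in the text above, is that the product of a vector with itself is proportional to its l_1 norm: m(x, x) = 2·||x||_1 (for nonzero entries), which is the source of the operator's robustness under impulsive noise.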
Smart Homes and Sleep Monitoring: My students and I developed a CPS consisting of vibration sensors and Passive Infrared (PIR) sensors for elderly care. We received a research grant from Turkish Telecom (TT) as part of their intelligent-home project. Our goal was to detect inactivity and/or unexpected stumbles and falls in a house. We designed a sensor network system that automatically calls a call center whenever there is unusual activity in the “intelligent” home. In our research, we integrated embedded processors with “dumb” sensors and extracted intelligent information from the sensor data. We organized a special issue on Signal Processing for Assisted Living that appeared in IEEE Signal Processing Magazine in March 2016 [10-12]. We noticed that the time-varying PIR sensor signal was sensitive enough to detect chest motion due to breathing. Together with Prof. Yusuf Ozturk (SDSU), we submitted a patent application in 2018 describing a sleep monitoring system consisting of sensors, microphones, vibration sensors, and a wireless transmitter [13]. We plan to analyze sleep using this “smart” monitoring system. Our system can estimate the respiration rate of a sleeping person and can detect sleep apnea and wheezing. Other applications of the IR sensor include epileptic seizure detection during sleep [22]. Prof. Ozturk and I received an NSF I-Corps grant in September 2018 for sleep monitoring and completed the NSF I-Corps training in 2019.
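A minimal sketch of how a respiration rate could be read off a PIR trace, assuming the chest motion shows up as a near-periodic component in a plausible breathing band; the published pipeline may filter and detect differently:

```python
import numpy as np

def respiration_rate_bpm(signal, fs, lo=0.1, hi=0.7):
    """Estimate respiration rate (breaths per minute) from a sensor
    trace by locating the dominant spectral peak in the band
    [lo, hi] Hz (roughly 6-42 breaths/min; band limits are
    illustrative assumptions)."""
    x = np.asarray(signal, dtype=float)
    x = x - np.mean(x)                       # remove DC offset
    spec = np.abs(np.fft.rfft(x))            # magnitude spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= lo) & (freqs <= hi)     # restrict to breathing band
    f_peak = freqs[band][np.argmax(spec[band])]
    return 60.0 * f_peak
```

For apnea detection one would additionally watch for intervals where the in-band spectral peak disappears, but that logic is omitted here.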

Camera Design and Embedded Vision Systems: I have been developing algorithms and software for the video surveillance industry since 2000. I have four US patents and several US patent applications. These patents are assigned to Grandeye (http://www.oncamgrandeye.com/), which has offices in Bridgewater, NJ; the UK; and Ankara, Turkey. I am one of the founders of Grandeye Ltd., which produces wide-angle cameras and digital video recording systems. The camera has a built-in embedded graphics processor, and distorted fisheye video is corrected in real time using the GPU. The Grandeye camera received design awards at the IFSEC Trade Fair in Birmingham, UK in 2005 and at the ISC West Trade Fair in Las Vegas in 2006. Grandeye was acquired by ONCAM in 2013. I developed intelligent video analysis and computer vision algorithms for wide-angle video for Oncam-Grandeye, including camera sabotage detection, automatic zoom, and people- and face-tracking algorithms. Oncam-Grandeye buys hard drives from Seagate Technology to store digital video, and I received research grants from Seagate Technology in 2018 and 2019 to detect unusual events in video. We developed a computationally efficient neural network to detect unusual events in video [23].
Biomedical Image Processing: I received a research grant, “Microscopic Image Processing, Analysis, Classification and Modeling Environment” (http://miracle.ee.bilkent.edu.tr/), from the European Union’s Seventh Framework Programme for Research (FP7) in 2010. We collaborated with the Biomedical Informatics Department at the Ohio State University in this project. My graduate students visited the Clinical Image Lab of my former Ph.D. student Prof. Metin Gurcan (now Professor at Wake Forest Medical School, NC) between 2010 and 2014, and the project was successfully completed in March 2014. We received a follow-up grant, “Computer Assisted Fluorescent Microscopy System and Applications,” together with the Fraunhofer Institute, Germany, in Dec. 2013; this research was jointly funded by the German Federal Ministry of Education and Research (BMBF) and the Turkish NSF (2014-2016). We developed automatic classification algorithms for fluorescence microscopy images [2],[3]. Our image feature extraction algorithms use a multiplication-free operator [3]-[5]. This operator leads to a “vector product” that produces more robust “correlations” under noise, because the vector product of a vector with itself is proportional to the l_1 norm of the vector. We also showed that the Gram matrix constructed using this operator is positive semidefinite and that the vector product defines a Mercer-type kernel. We plan to collaborate with Prof. G. Mutlu and Dr. Rengul Atalay at University of Chicago Medicine to analyze microscopic images using our new feature extraction schemes. We will also develop interpretable deep neural networks using our robust operators [3]-[5], [24],[25].
Speech and Sound Analysis: We contributed to speech coding standards in the 1990s. We developed vector quantization (VQ) algorithms for Line Spectral Frequencies (LSF) in the early 1990s. In his speech coding book [6], W.C. Chu describes our work as follows: "Adaptive predictor and split VQ were proposed by Erzin and Cetin [1993]. The LSF quantizer of the G.729 was originally proposed in Kataoka et al. [1993, 1994] and later incorporated in the design of a CS-CELP coder [Kataoka et al., 1996]." NTT researcher Kataoka references our paper [Erzin, Cetin, 1993] in his paper describing the toll-quality G.729 standard as follows: "In CS-CELP, since the frame length is as short as 10 ms, we can exploit the redundancy arising from the correlation between consecutive frames [Erzin, Cetin 1993], [Xydeas]."
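The interframe differential VQ idea in [Erzin, Cetin, 1993] can be sketched as follows: predict the current frame's LSF vector from the previous frame and vector-quantize only the prediction residual. The first-order predictor coefficient and the tiny codebook below are hypothetical placeholders, not the published parameters:

```python
import numpy as np

def encode_frame(lsf, prev_lsf, codebook, alpha=0.5):
    """Interframe differential VQ of one LSF frame (sketch).

    lsf, prev_lsf: current and previous LSF vectors.
    codebook:      (K, d) array of residual codewords.
    alpha:         hypothetical first-order predictor coefficient.
    Returns the chosen codeword index and the decoded LSF vector.
    """
    pred = alpha * prev_lsf                   # exploit interframe correlation
    resid = lsf - pred                        # only the residual is quantized
    idx = int(np.argmin(np.sum((codebook - resid) ** 2, axis=1)))
    return idx, pred + codebook[idx]          # decoder reconstruction
```

Because consecutive LSF frames are highly correlated, the residual has much smaller variance than the raw LSF vector, so fewer bits are needed at the same distortion, which is the redundancy Kataoka's quote refers to.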
I developed speech and sound recognition systems in the past [7-9]. I have been working with an orthopedist to analyze spine sounds. This study aims to investigate spine sounds from the perspective of their use in diagnosing degenerative disease of the lumbar spinal column [25].
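For example, the Teager-Kaiser energy operator behind the feature parameters in [9] is a simple three-sample nonlinearity:

```python
import numpy as np

def teager_energy(x):
    """Discrete Teager-Kaiser energy operator:
       Psi[x](n) = x(n)^2 - x(n-1) * x(n+1).
    For a sinusoid A*cos(w*n) it yields the constant A^2 * sin(w)^2,
    tracking both amplitude and frequency; this makes it useful as a
    noise-robust energy feature for speech in car noise."""
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]
```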
Inverse Problems: Early in my career, I developed algorithms for inverse signal and image recovery problems using the method of Projections onto Convex Sets (POCS). I was elected a Fellow of the IEEE for contributions to “signal recovery and image analysis algorithms” in 2009. Recently, I defined new convex sets using the epigraph sets of widely used convex regularization functions such as the l_1 norm and the Total Variation (TV) function. In this way, it is possible to incorporate projections onto the epigraph set of the l_1 norm or the TV function into iterative algorithms converging to the set of feasible solutions of the inverse problem. We applied this new approach to deconvolution problems in microscopy and MRI imaging [1],[18], and we were able to recover high-quality fluorescence microscopy images from blurred originals using the assumptions of sparsity in the wavelet domain and phase invariance [1]. We also used the epigraph set of the l_1 norm in time-delay estimation in passive radar systems and in estimation of the Wigner-Ville distribution. Time-frequency distributions of most practical signals are sparse; as a result, the epigraph sets of the l_1 norm or the TV function provide robustness against noise during reconstruction. We will embed epigraph projections into iterative inverse-imaging algorithms based on deep neural networks, using the concept of sparse coding.
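A sketch of the epigraph-projection building block for the l_1 norm: project a point (v, s) onto the set E = {(x, t) : ||x||_1 <= t}. The projection reduces to soft-thresholding v with a threshold found by a one-dimensional search; the bisection used here is one simple way to find it, not necessarily the procedure in the papers:

```python
import numpy as np

def project_epigraph_l1(v, s, iters=60):
    """Euclidean projection of (v, s) onto E = {(x, t): ||x||_1 <= t}.

    KKT conditions give x = soft(v, th) and t = s + th, where the
    threshold th solves ||soft(v, th)||_1 = s + th; the left side is
    decreasing in th and the right side increasing, so bisection works.
    """
    v = np.asarray(v, dtype=float)
    if np.sum(np.abs(v)) <= s:
        return v.copy(), s                       # already inside the epigraph

    def soft(z, th):                             # soft-thresholding (prox of l1)
        return np.sign(z) * np.maximum(np.abs(z) - th, 0.0)

    lo, hi = 0.0, np.max(np.abs(v)) + max(0.0, -s) + 1.0
    for _ in range(iters):
        th = 0.5 * (lo + hi)
        if np.sum(np.abs(soft(v, th))) - (s + th) > 0:
            lo = th
        else:
            hi = th
    th = 0.5 * (lo + hi)
    return soft(v, th), s + th
```

In a POCS loop, this projection is alternated with projections onto the data-consistency sets of the inverse problem until the iterates settle in the feasible set.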
Energy Efficient Signal and Image Processing, Deep Neural Networks, and Beyond:
Multiplication operations in a regular inner product are costly in terms of energy consumption when the dimension is large. We estimate that a multiplication consumes about 4 times more energy than an addition in compute-in-memory (CIM) implementations [24]. Our l_1-norm-inducing Multiplication-Free (MF) operators, based only on additions and sign operations, will make robust and energy-efficient Principal Component Analysis (PCA) feasible in large-scale data sets. Beyond PCA, we also propose to apply the MF vector products to edge computing, signal and image processing problems at the edge, graph-based recommender systems requiring power iterations, and different neural network architectures to increase robustness and reduce power consumption. Some initial work in this context can be found in [26],[27].
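As a hedged sketch of the MF-based PCA direction, a power-iteration-style loop can swap the usual sample-score inner products for the sign-based MF product; this only illustrates the idea behind [26], not the published algorithm:

```python
import numpy as np

def mf_prod(X, u):
    """Row-wise multiplication-free product: for each row x of X,
       m(x, u) = sum_i sign(x_i * u_i) * (|x_i| + |u_i|)."""
    return np.sum(np.sign(X * u) * (np.abs(X) + np.abs(u)), axis=1)

def mf_power_iteration(X, iters=50, seed=0):
    """Power-iteration-style estimate of a dominant direction of the
    (centered) data X, with the per-sample inner products replaced by
    the l_1-related MF product for robustness."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(X.shape[1])
    u /= np.linalg.norm(u)
    for _ in range(iters):
        scores = mf_prod(X, u)        # MF "correlation" with each sample
        u = X.T @ scores              # recombine samples along the scores
        u /= np.linalg.norm(u)        # keep the direction unit-norm
    return u
```

Because the score computation uses only signs, absolute values, and additions, outlying samples contribute proportionally to their l_1 magnitude rather than quadratically, which is the robustness argument for MF-based PCA.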

References:
[1] M. Tofighi, O. Yorulmaz, K. Kose, D.C. Yildirim, R. Cetin-Atalay and A. Enis Cetin, "Phase and TV Based Convex Sets for Blind Deconvolution of Microscopic Images," IEEE Journal of Selected Topics in Signal Processing, vol. 10, no. 1, pp. 81-91, Feb. 2016.
[2] Aichinger, Wolfgang, Sebastian Krappe, A. Enis Cetin, Rengul Cetin-Atalay, Aysegül Üner, Michaela Benz, Thomas Wittenberg, Marc Stamminger, and Christian Muenzenmayer. "Automated cancer stem cell recognition in H and E stained tissue using convolutional neural networks and color deconvolution." In Medical Imaging 2017: Digital Pathology, vol. 10140, p. 101400N. International Society for Optics and Photonics, 2017.
[3] Badawi, Diaa, Ece Akhan, Ma'en Mallah, Ayşegül Üner, Rengül Çetin-Atalay, and A. Enis Çetin. "Multiplication free neural network for cancer stem cell detection in H-and-E stained liver images." In SPIE Commercial + Scientific Sensing and Imaging, pp. 102110C-102110C. International Society for Optics and Photonics, 2017.
[4] Arman Afrasyabi, Diaa Badawi, Baris Nasir, Ozan Yildiz, Fatos Yarman-Vural, and Ahmet Enis Cetin, "Non-Euclidean Vector Product for Neural Networks," IEEE ICASSP, Paper #3421, May 2018, Calgary, Canada.
[5] Oğuz, O., Çetin, A. E., & Atalay, R. Ç. (2018, January). Classification of Hematoxylin and Eosin Images Using Local Binary Patterns and 1-D SIFT Algorithm. In Multidisciplinary Digital Publishing Institute Proceedings (Vol. 2, No. 2, p. 94).
[6] W.C. Chu's book "Speech Coding Algorithms: Foundation and Evolution of Standardized Coders", John Wiley & Sons, 2003
[Erzin, Cetin, 1993] E. Erzin and A. E. Cetin, "Interframe differential vector coding of line spectrum frequencies," Acoustics, Speech, and Signal Processing, 1993. ICASSP-93., 1993 IEEE International Conference on, Minneapolis, MN, USA, 1993, pp. 25-28 vol.2.
E. Erzin and A. E. Cetin, "Interframe differential coding of line spectrum frequencies," in IEEE Transactions on Speech and Audio Processing, vol. 2, no. 2, pp. 350-352, Apr 1994.
[Kataoka, 1996] A. Kataoka, et al., "An 8-kb/s conjugate structure CELP (CS-CELP) speech coder," IEEE Transactions on Speech and Audio Processing, 1996.
[7] Töreyin, B. U., Dedeoğlu, Y., & Çetin, A. E. (2005, October). HMM based falling person detection using both audio and video. In International Workshop on Human-Computer Interaction (pp. 211-220). Springer, Berlin, Heidelberg.
[8] A. Enis Çetin, Thomas Pearson, R.A. Sevimli, “System for removing shell pieces from hazelnut kernels using impact vibration analysis,” Computers and Electronics in Agriculture, Nov. 2013
[9] Jabloun, F., Cetin, A. E., & Erzin, E. (1999). Teager energy based feature parameters for speech recognition in car noise. IEEE Signal Processing Letters, 6(10), 259-261.
[10] Fatih Erden and A. Enis Cetin, "Infrared Sensors for Indoor Monitoring," Chapter 15, pp. 365-380 in Radar for In-Door Monitoring: Detection, Classification, and Assessment, Editor Moeness Amin, CRC Press, 2018.
[11] F. Erden, S. Velipasalar, A. Z. Alkar, A.E. Cetin, "Sensors in assisted living: A survey," IEEE Signal Processing Magazine, Volume: 33 Issue: 2 Pages: 36-44, Mar. 2016
[12] Ahmad, F.; Cetin, A. Enis; Ho, K. C. (Dominic); J. Nelson, “Signal Processing for Assisted Living: Developments and Open Problems,” IEEE Signal Processing Magazine, Volume: 33 Issue: 2 Pages: 25-26 Published: Mar 2016
[13] Yusuf Ozturk and A. Enis Cetin, "Method and System for Monitoring a Subject in a Sleep or Resting State," patent application, May 12, 2018.
[14] Gunay, O., Toreyin, B. U., Kose, K., & Cetin, A. E. (2012). Entropy-functional-based online adaptive decision fusion framework with application to wildfire detection in video. IEEE Transactions on Image Processing, 21(5), 2853-2865.
[15] A. Enis Cetin, Bart Merci, O. Günay, B.U. Töreyin, and Steven Verstockt, Methods and Techniques for Fire Detection: Signal, Image and Video Processing Perspectives, 1st Edition, Academic Press, 250 pages, ISBN 9780128023990, 15 Feb 2016.
[16] A. Enis Cetin and B. U. Toreyin, "Method, device and system for determining the presence of volatile organic compounds (VOC) in video," US Patent 8,432,451, issued 30 Apr 2013.
[17] Pearson, T., Cetin, A. E., Tewfik, A. H., & Gokmen, V. (2007). An overview of signal processing for food inspection [applications corner]. IEEE Signal Processing Magazine, 24(3), 106-109.
[18] M. Shahdloo, E. Ilicak, M. Tofighi, Emine U. Saritas, A. Enis Cetin, and Tolga Cukur, "Projection onto Epigraph Sets for Rapid Self-Tuning Compressed Sensing MRI," IEEE Transactions on Medical Imaging, vol. 38, no. 7, pp. 1677-1689, 2019.
[19] Badawi, D., Ozev, S., Christen, J. B., Yang, C., Orailoglu, A., & Çetin, A. E. (2019, May). Detecting Gas Vapor Leaks through Uncalibrated Sensor Based CPS. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 8296-8300). IEEE.
[20] Aslan, S., Güdükbay, U., Töreyin, B. U., & Çetin, A. E. (2019, May). Early Wildfire Smoke Detection Based on Motion-based Geometric Image Transformation and Deep Convolutional Generative Adversarial Networks. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 8315-8319).
[21] Badawi, Diaa, Hongyi Pan, Sinan Cem Cetin, and A. Enis Cetin. "Computationally Efficient Spatio-Temporal Dynamic Texture Recognition for Volatile Organic Compound (VOC) Leakage Detection in Industrial Plants." IEEE Journal of Selected Topics in Signal Processing (2020).
[22] Hanosh, O., Ansari, R., Younis, K., & Cetin, A. E. (2019). Real-time epileptic seizure detection during sleep using passive infrared sensors. IEEE Sensors Journal, 19(15), 6467-6476.
[23] Muneeb, U., Koyuncu, E., Keshtkarjahromd, Y., Seferoglu, H., Erden, M. F., & Cetin, A. E. (2020, May). Robust and Computationally-Efficient Anomaly Detection Using Powers-Of-Two Networks. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 2992-2996). IEEE.
[24] Nasrin, S., Badawi, D., Cetin, A.E., Gomes, W. and Trivedi, A.R., “MF-Net: Compute-In-Memory SRAM for Multibit Precision Inference Using Memory-Immersed Data Conversion and Multiplication-Free Operators. IEEE Transactions on Circuits and Systems I: Regular Papers, 68(5), pp.1966-1978, 2021.
[25] Nabi, V., Ayhan, S., Acaroglu, E., Ahi, M.A., Toreyin, H. and Cetin, A.E. Can we diagnose disk and facet degeneration in lumbar spine by acoustic analysis of spine sounds? Signal, Image and Video Processing, 15(3), pp. 557-562, 2021.
[26] Pan, Hongyi, Diaa Badawi, Erdem Koyuncu, and A. Enis Cetin. "Robust Principal Component Analysis Using a Novel Kernel Related with the L1-Norm." arXiv preprint arXiv:2105.11634 to appear in EUSIPCO (2021).
[27] Pan, Hongyi, Diaa Badawi, and Ahmet Enis Cetin. "Fast Walsh-Hadamard Transform and Smooth-Thresholding Based Binary Layers in Deep Neural Networks." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4650-4659, 2021.

Intellectual Property

Ahiska, B., Davey, M.K. and Cetin, A.E., Grandeye Ltd, 2011. Automatically expanding the zoom capability of a wide-angle video camera. U.S. Patent 7,990,422.

Cetin, A.E., Davey, M.K., Cuce, H.I., Castellari, A.E. and Mulayim, A., Grandeye Ltd, 2011. Method of compression for wide angle digital video. U.S. Patent 7,894,531.

Cetin, A.E. and Toreyin, B.U., Delacom Detection Systems LLC, 2013. Method, device and system for determining the presence of volatile organic compounds (VOC) in video. U.S. Patent 8,432,451.

Artistic and Professional Performances and Exhibits

I will be teaching ECE 491:
ECE 491 Introduction to Artificial Neural Networks

Description: Credit 3. This course provides an introduction to artificial neural networks and deep learning. Biophysical and mathematical models of neurons. Perceptron and its relation to the LMS algorithm. Parallel computing and GPUs. Convolutional neural networks, recurrent neural networks (LSTM and gated recurrent units), and residual networks.
Prerequisites: MATH 310, ECE 310 or equivalent and basic computer programming skills.
Overview:
This is an introductory course on artificial neural networks for senior undergraduate and beginning graduate students. Prerequisites are linear algebra, calculus, and basic computer programming. There will be 8 labs, 1 midterm exam, and 1 final exam. The laboratory part of the course covers practical applications; students will use deep learning libraries such as Keras and TensorFlow, and will also learn how to train neural networks using GPUs.

Weekly Topics:
1. Introduction
2. Biophysical and Mathematical Models of Neurons
3. Early Artificial Neural Network (ANN) Structures: Perceptrons
4. Relation between perceptrons, adaptive FIR filters, the Least Mean Squares (LMS) algorithm, and gradient descent
5. Learning: Supervised, Unsupervised, Reinforcement Learning
6. Training Single Layer ANNs
7. Parallel computing and Graphics Processing Units (GPU)s
8. Training Multilayer ANNs: Back Propagation (BP), Empirical risk minimization and deep learning, batch normalization
9. Optimization methods for training deep models and regularization
10. Sequence modeling, Recurrent and Recursive Neural Networks (RNNs), Long Short-Term Memory (LSTM) Networks
11. Autoencoders: Denoising, Contractive, Stacked ANNs
12. Convolutional NNs and their deep versions
13. Applications to computer vision and image processing
14. Applications to Speech Recognition and Natural Language Processing
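As a taste of topic 4 (the perceptron-LMS connection), here is a minimal LMS adaptive FIR filter: the same stochastic-gradient update on the squared error that trains a linear, perceptron-like neuron. The tap count and step size are illustrative choices:

```python
import numpy as np

def lms_filter(x, d, taps=4, mu=0.05):
    """Least Mean Squares adaptive FIR filter.

    x:  input signal, d: desired signal.
    Update rule: w <- w + mu * e(n) * x_vec(n), i.e. stochastic
    gradient descent on the instantaneous squared error e(n)^2.
    Returns the final weights and the filter output.
    """
    w = np.zeros(taps)
    y = np.zeros(len(x))
    for n in range(taps, len(x)):
        xv = x[n - taps:n][::-1]    # most recent samples first
        y[n] = w @ xv               # linear neuron output
        e = d[n] - y[n]             # error signal
        w += mu * e * xv            # LMS / delta-rule update
    return w, y
```

Replacing the linear output with a hard threshold and the error with the classification error turns this same update into the classic perceptron learning rule, which is the connection the lecture develops.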

Recommended Textbooks:
• Ian Goodfellow, Y. Bengio, A. Courville, Deep Learning, MIT Press, 2016. http://www.deeplearningbook.org/
This is an online textbook by the people who are pioneers of deep learning and generative modeling.
• Daniel Graupe, Deep Learning Neural Networks: Design and Case Studies, World Scientific Press, 2016.
• J. M. Zurada, Introduction to Artificial Neural Systems, West Publishing Co., St. Paul, 1992.
• S. Haykin, Neural Networks: A Comprehensive Foundation, 2nd ed., Macmillan, New York, 1999.