CCD Sensor Based Cameras For Sustainable Streaming
This is the author's version which has not been fully edited and
content may change prior to final publication. Citation information: DOI 10.1109/ACCESS.2023.3291396
ABSTRACT This paper presents a comprehensive study of compressed sensing (CS) techniques applied to
Charge Coupled Device (CCD) and Complementary Metal-Oxide Semiconductor (CMOS) sensor-based
cameras. CS is a powerful technique for reducing the number of measurements required to capture high-
quality images while maintaining a high signal-to-noise ratio (SNR). In this study, we propose a novel CS
method for CCD and CMOS sensor-based cameras that combines a new sampling scheme with a sparsity-
inducing transform and a reconstruction algorithm to achieve high-quality images with fewer measurements.
This paper focuses on an efficient CCD image capturing system suitable for embedded IoT applications.
Hardware implementation has been done for proof of concept with an onboard Field Programmable Gate
Array (FPGA) performing the compression. This hardware module is used over a wireless network to transmit
and receive images under different test conditions with both CMOS and CCD sensors. For each use case,
Peak Signal to Noise Ratio (PSNR), average power, and memory usage are computed under different ambient
lighting conditions from dark to very bright. The results show that a 640x480 CCD sensor with compressed sensing at a sparsity of 0.5 provides a 13% power saving and a 15% memory saving compared to uncompressed sensing in the no-light condition, with a PSNR of 25.76 dB, whereas the CMOS sensor does not capture any image at all in the no-light condition. These results show that the CCD image capturing system with compressed sensing can be conveniently used for embedded IoT applications. The data recovery from the wireless sensor network is done at a central office, where computing time and processing power resources are not constrained. The weight of the CCD camera is approximately 100 grams with a modular build approach.
INDEX TERMS CCD Imager, CMOS Imager, Compressive Sensing, Dynamic Range, Fill-factor, Global
Shutter, IoT, LVDS, PSNR, Quantum Efficiency, Rolling Shutter, Wearable Camera
In this paper, we present the results of our research using a novel technique that addresses all these issues. We used a compressive sensing technique that avoids the use of higher memory and on-board video processing, thereby limiting the power requirement. We implemented this with a CCD imager to obtain decent-quality video frames under low illumination. Combining these two makes a high-quality wearable Internet of Things (IoT) camera a reality, solving the power, memory, quality, and weight issues, thus resulting in a sustainable system.

The remainder of this paper is organized as follows. Section II presents an overview of CMOS and CCD image sensor characteristics along with their merits and demerits. Section III presents the requirements of an image sensor for wearable IoT applications. Sections IV and V present a compressed sensing and retrieval technique that can be used for video frame compression. Section VI presents the hardware implementation. Section VII presents the results of combining image compression with the CCD image sensor; for comparison, we carried out the same experiments with a CMOS image sensor. The concluding section presents the challenges of image compression in IoT applications as future research directions.

II. OVERVIEW OF CCD AND CMOS IMAGERS
There is an extensive body of literature on construction techniques for both CCD [2][3][4] and CMOS [5][6][7][8] image sensors. Although the main purpose of CCD technology when invented at Bell Labs in 1969 was data storage [9], its capability to transport electronic charge made it possible to use it as an image sensor that converts light to analog pixel information. Table I shows the various characteristics and parameters of CCD and CMOS image sensors.

TABLE I
IMPORTANT CHARACTERISTICS OF CMOS AND CCD IMAGE SENSORS

Characteristics                 CCD Image Sensor          CMOS Image Sensor
Output electrical signal        Analog                    Digital
Pixel Sensor (PPS/APS)          Passive                   Active
Readout Speed                   Relatively Low            Relatively High
Frame Rate                      Low                       High
Fixed Pattern Noise             Low                       High
Pixel Size                      Large                     Small
Resolution                      Low - High                Low - High
Fill Factor                     High (100%)               Moderate (60-75%)
Shutter Type                    Global Shutter            Rolling Shutter
Skew                            No                        Yes
On-Chip Voltage Requirement     Multiple Voltages         Single Voltage
Power Consumption               Relatively High (2-5 W)   Low (few mW)
Dynamic Range                   High                      Moderate
Thermal Noise / Dark Current    Low                       High
Smearing & Blooming Effect      More                      Less
Quantum Efficiency              High                      Low

From the above list, we explain a few important characteristics of both the CCD and CMOS image sensors.

A. ACTIVE OR PASSIVE PIXEL SENSOR
The pixel sensor can be active or passive, depending on the construction of the sensor. In a Passive Pixel Sensor (PPS), light is converted to voltage by photosites, tiny light collectors that convert photons into voltage; no amplification is performed in the passive pixel sensor. In an Active Pixel Sensor (APS) [7], light is converted to voltage using active electronic circuitry, including photodiodes and amplifiers. The CCD image sensor is built using PPS, whereas CMOS sensors use both PPS and APS.

B. FILL FACTOR
The percentage of a photo-site (pixel) devoted to collecting light, that is, the percentage of the photo-site that is sensitive to light, is referred to as the pixel's Fill Factor (FF). A low FF requires a longer exposure time and bright light to capture an image of decent quality. CCD sensors have a 100% FF, whereas CMOS sensors have a much lower FF. This is because, in a CMOS image sensor, each photo-site includes circuitry for filtering noise and amplifying the signal; hence, the area that is sensitive to light is significantly reduced (typically 60%-75% of that of a CCD). The concept of FF is illustrated in [10], as shown in Fig. 1.

FIGURE 1. Concept of Fill Factor; Courtesy: Silicon Imaging [10].

The FF refers to the percentage of the photo-site that is sensitive to light. If circuitry covers 25% of each photo-site, the sensor is said to have a FF of 75%. The higher the fill factor, the more sensitive the sensor is [10]. Recent studies [11] have shown that a fill factor of the order of 61% and slightly above is achievable in CMOS image sensors with Single Photon Avalanche Diode (SPAD) detector arrays.

C. SHUTTER TYPE
Two types of shutters are used in image sensors: rolling shutters and global shutters [12]. Owing to the nature of the construction mechanism, all CCD sensors have a global shutter and all CMOS sensors have rolling shutters.
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License. For more information, see https://creativecommons.org/licenses/by-nc-nd/4
The performance of rolling-shutter and global-shutter cameras is compared in [13]. In the rolling shutter mechanism, the pixels of the frame are exposed one row at a time and then converted into digital signals one row at a time; the exposure times for all the rows are the same. In a global shutter, all the pixels are exposed to the light simultaneously and then read out sequentially. Since all pixels in a global shutter are captured at the same time, the image quality is superior to that of a rolling shutter.

D. DYNAMIC RANGE
The dynamic range of an image sensor is defined as the ratio of the maximum achievable signal (proportional to light) to the sensor noise. The dynamic range of image sensors can be calculated using the photon transfer curve, as detailed in [16]. The dynamic range is the range between the brightest and darkest readable pixel areas that can be captured in a single image. CCD image sensors have a better dynamic range than CMOS image sensors because there is no minimum light required for the CCD to represent the charge.

E. QUANTUM EFFICIENCY
The QE is measured at various photon wavelengths for a given image sensor. The calculation of quantum efficiency is influenced by various factors, such as temperature, voltage bias, and material quality; therefore, it is essential to perform the measurements under controlled conditions to obtain accurate results. The QE of a CCD sensor can be modeled as the product of three components: the quantum efficiency of the sensor's photodiode, the reflectance of the sensor's surface, and the transmission of the sensor's cover glass. For a typical CCD sensor, the QE is highest in the visible range, with a peak value around 500-600 nm. At longer wavelengths, the QE drops rapidly due to absorption in the silicon substrate of the sensor and is typically less than 10% beyond 900 nm. Fig. 3 shows the QE values captured at various photon wavelengths. Studies have shown [18] that the QE of a CCD image sensor at a wavelength of 550 nm is 70%, whereas that of a CMOS sensor is only 37% at the same wavelength.

FIGURE 3. Quantum Efficiency at various wavelengths for a CCD Sensor.
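The dynamic-range definition above (maximum achievable signal over sensor noise) is commonly expressed in decibels as 20 log10 of the ratio of full-well capacity to read noise. A minimal sketch with illustrative numbers; the electron counts are assumptions for demonstration, not measurements from this work:

```python
import math

def dynamic_range_db(full_well_electrons: float, read_noise_electrons: float) -> float:
    """Dynamic range = ratio of maximum achievable signal to sensor noise, in dB."""
    return 20.0 * math.log10(full_well_electrons / read_noise_electrons)

# Illustrative values only: a pixel with a large full-well capacity and low
# read noise yields a wider dynamic range than a noisier pixel.
print(round(dynamic_range_db(50_000, 10), 1))   # wide dynamic range
print(round(dynamic_range_db(20_000, 25), 1))   # narrower dynamic range
```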
m represents the number of measurements, and n represents the number of pixels. If m < n, the solution is said to be ill-posed, as there are more unknowns than equations.

This problem can be overcome by using the sparsity k of the signal x. In this scenario, the measurement vector y is a linear combination of the k columns of the measurement matrix that correspond to the nonzero entries of x. If we know beforehand the k nonzero entries of x, then we can form an m × k system of linear equations to solve for the nonzero values; in this case, the number of equations m equals or exceeds the number of unknowns k. The measurement matrix must satisfy the Restricted Isometry Property (RIP), which is a necessary and sufficient condition to ensure that the m × k system is resolvable [22]. The RIP is expressed as shown in (3).

(1 − δ) ≤ ‖ΦΨ⁻¹v‖₂ / ‖v‖₂ ≤ (1 + δ)    (3)

MATLAB simulation results [25] for standard images with different sparsity values are shown in Fig. 7.

FIGURE 7. Simulation results for standard images at different sparsity values (normalized coefficient = coefficient / max(coefficient)).
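The known-support recovery argument above can be checked numerically. The toy sketch below (in Python rather than the MATLAB used for the paper's simulations) builds a k-sparse signal, takes m < n random measurements, and solves the resulting m × k least-squares system; all dimensions are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8              # n pixels, m < n measurements, sparsity k

# k-sparse signal x: only k of the n entries are nonzero.
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.normal(size=k)

# Random Gaussian measurement matrix; such matrices satisfy the RIP of (3)
# with high probability when m is on the order of k*log(n/k).
Phi = rng.normal(size=(m, n)) / np.sqrt(m)

# y = Phi @ x is a linear combination of the k columns of Phi on the support.
y = Phi @ x

# With the support known, recovery reduces to an m x k least-squares system.
x_hat = np.zeros(n)
x_hat[support] = np.linalg.lstsq(Phi[:, support], y, rcond=None)[0]
print(np.allclose(x_hat, x))      # True: exact recovery from m < n measurements
```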
Least-Squares Estimation:
• Form a submatrix A_Ω by selecting the columns of A indexed by the elements in Ω.
• Solve the least-squares problem min ||A_Ω * x_Ω - y||^2, where x_Ω represents the sub-vector of x corresponding to the indices in Ω.

FIGURE 9. System-on-Module (SoM) Hardware Implementation for Image Acquisition and Processing.
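A least-squares update of this form is the core of greedy sparse-recovery algorithms such as Orthogonal Matching Pursuit [22][26]. The sketch below is our illustration of that family of algorithms, not necessarily the exact recovery code run at the central office:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse x from y = A @ x."""
    n = A.shape[1]
    residual = y.astype(float).copy()
    omega = []                                   # Ω: selected column indices
    x_omega = np.zeros(0)
    for _ in range(k):
        # Greedy step: pick the column of A most correlated with the residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        omega.append(j)
        # Least-squares estimation over the submatrix A_Ω (the step above).
        x_omega = np.linalg.lstsq(A[:, omega], y, rcond=None)[0]
        residual = y - A[:, omega] @ x_omega
    x = np.zeros(n)
    x[omega] = x_omega
    return x

# Toy usage: recover a 4-sparse signal from 32 random measurements.
rng = np.random.default_rng(1)
A = rng.normal(size=(32, 128)) / np.sqrt(32)
x_true = np.zeros(128)
x_true[rng.choice(128, size=4, replace=False)] = rng.normal(size=4)
y = A @ x_true
x_rec = omp(A, y, 4)
print(np.linalg.norm(x_rec - x_true))   # small when recovery succeeds
```

The stopping rule here is a fixed iteration count k; practical implementations often stop instead when the residual norm falls below a noise-dependent threshold.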
FIGURE 11. PSNR Computation Algorithm.

In Table II, original image and reconstructed image have the following meanings:
Original Image: At the output of the sensor.
Reconstructed Image: At the IoT receiver in the wireless sensor network.
Constant parameters were:

Sensor   PSNR (dB)               Memory Saving   Power Saving
CMOS     0 (no image captured)   NA              NA
CCD      25.76                   15%             13%

TABLE IV
TEST CONDITION 3: AMBIENT LIGHT INTENSITY < 100 LUX; SPARSITY = 0.7
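We assume the PSNR computation of Fig. 11 follows the standard definition, PSNR = 10 log10(MAX^2 / MSE), comparing the sensor-output image with the image reconstructed at the IoT receiver. A minimal sketch; the 8-bit grayscale frame below is a stand-in, not data from the experiments:

```python
import numpy as np

def psnr_db(original, reconstructed, max_val=255.0):
    """PSNR between the sensor-output image and the reconstructed image:
    10 * log10(MAX^2 / MSE)."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")               # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Stand-in 640x480 8-bit frame with a single corrupted pixel.
a = np.full((480, 640), 100, dtype=np.uint8)
b = a.copy()
b[0, 0] = 110
print(psnr_db(a, b))                      # high PSNR: images nearly identical
```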
Sensor   PSNR (dB)               Memory Saving   Power Saving
CMOS     0 (no image captured)   NA              NA
CCD      32.71                   12%             11%

CCD      30.71                   12%             11%

TABLE VIII
TEST CONDITION 7: AMBIENT LIGHT INTENSITY = 100000 LUX; SPARSITY = 0.2
Sensor   PSNR (dB)   Memory Saving   Power Saving
CMOS     24.93       14%             13%
CCD      20.12       21%             14%

TABLE IX
TEST CONDITION 7: AMBIENT LIGHT INTENSITY = 100000 LUX; SPARSITY = 0.5
Sensor   PSNR (dB)   Memory Saving   Power Saving
CMOS     20.25       20%             15%
CCD      25.42       15%             13%

CCD      25.76       15%             13%

TABLE VII
TEST CONDITION 6: AMBIENT LIGHT INTENSITY = 1000 LUX; SPARSITY = 0.7

TABLE X
TEST CONDITION 7: AMBIENT LIGHT INTENSITY = 100000 LUX; SPARSITY = 0.7
Sensor   Original Image   Reproduced Image   PSNR (dB)   Memory Saving   Power Saving
CCD      34.71       13%             12%

TABLE XII
PSNR AND POWER SAVINGS FOR CCD AND CMOS IMAGE SENSORS AT DIFFERENT SPARSITY VALUES

Ambient Light   CMOS    CCD     CMOS    CCD     CMOS    CCD
(Lux)           S=0.2   S=0.2   S=0.5   S=0.5   S=0.7   S=0.7
400             12.45   24.45   12.76   27.65   12.98   32.41
1000            25.23   25.51   27.23   27.93   32.45   32.71
2000            26.56   26.71   28.76   28.88   33.29   33.86
3000            27.44   27.86   29.23   29.86   34.23   34.98
5000            29.64   29.87   31.56   32.78   35.53   35.89
10000           30.55   30.86   32.82   33.18   36.39   36.98
20000           31.66   31.78   33.59   34.12   37.81   37.99
100000          55.43   43.48   58.48   47.69   60.47   52.32

FIGURE 13. Power Savings for CMOS and CCD sensors at different sparsity values (x-axis: Ambient Light in Lux, logarithmic scale; legend: CMOS Power S=0.2, CCD Power S=0.2, CMOS Power S=0.5, CCD Power S=0.5, CMOS Power S=0.7, CCD Power S=0.7).
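Since Fig. 13 shows that the savings track the sparsity level rather than the ambient light, an embedded node could choose its operating point from a small lookup of the reported CCD figures. In the sketch below, the table values are the memory and power savings quoted for the CCD sensor above; the selection policy itself is our illustration, not part of the proposed system:

```python
# CCD savings per sparsity level, as reported in the test-condition tables:
# sparsity -> (memory saving %, power saving %).
CCD_SAVINGS = {0.5: (15, 13), 0.7: (12, 11)}

def pick_sparsity(min_memory_saving):
    """Highest sparsity level that still meets the memory-saving target.

    Higher sparsity keeps more measurements and hence more image detail,
    so among feasible levels we take the largest."""
    feasible = [s for s, (mem, _) in CCD_SAVINGS.items() if mem >= min_memory_saving]
    if not feasible:
        raise ValueError("no sparsity level meets the memory-saving target")
    return max(feasible)

print(pick_sparsity(13))   # -> 0.5 (15% memory and 13% power saving)
```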
sensors, the savings are significantly high. With the adoption of compressed sensing onboard the CCD image sensor, the memory and power requirements are reduced. Hence, the CCD image sensor with compressed sensing can be used for wearable IoT applications, which makes it sustainable even in low-light conditions. The image recovery system can be on a cloud or in a data center with no resource limitations. Using CCD sensors for drone applications [28] has a definite advantage.

From Table XII and Fig. 13, we can see that the power saving is a function of the sparsity value and does not change with ambient lighting conditions. With a CCD sensor using compressed sensing (with a sparsity of 0.5), we can achieve a reasonably decent image capturing system that can be used in embedded IoT applications. The memory saving, too, is a function of the sparsity level and the image sensor resolution: the higher the resolution of the image sensor, the greater the memory saving at a sparsity level of 0.5. The percentages of memory savings are shown in Table II to Table X for all the test conditions.

The entire system can be made even more sustainable by using both CMOS and CCD sensors in the modular design shown in Fig. 8 and Fig. 9 for night-watchman drone applications. Both sensors can be connected to a single backend module through a multiplexer. At any given point in time, only one sensor module, CMOS or CCD, is connected to the backend system. The multiplexer select signal is controlled by a light sensor: under adequate ambient lighting conditions, the CMOS sensor is chosen, and in the dark, the CCD sensor is chosen. Energy models for sensor nodes in wireless sensor networks with compressed sensing are proposed in [29].

The method proposed in this paper can achieve high-quality images with significant reductions in the number of measurements required, making it a promising technique for CCD sensor-based cameras. The simulation and experiment results demonstrate the effectiveness of the proposed method in reconstructing images with a high SNR and without visible artifacts. Future work can investigate the use of different sparsifying transforms, measurement matrices, and reconstruction algorithms to further improve the performance of the proposed method. Additionally, the proposed method can be extended to color images by compressing and reconstructing each color channel separately. Overall, the proposed method has the potential to improve the performance of CCD sensor-based cameras in various imaging applications where CMOS sensors cannot capture the images.

REFERENCES
[1]. E. R. Fossum, "CMOS image sensors: Electronic camera-on-a-chip," IEEE Transactions on Electron Devices, vol. 44, no. 10, pp. 1689-1698, Oct. 1997, doi: 10.1109/16.628824.
[2]. M. G. Farrier and R. H. Dyck, "A Large Area TDI Image Sensor for Low Light Level Imaging," IEEE Journal of Solid-State Circuits, vol. 15, no. 4, pp. 753-758, Aug. 1980, doi: 10.1109/JSSC.1980.1051465.
[3]. L. Zhiyong, Y. Weihua and D. Xiance, "The Analog Front End of Ultra-High Resolution CCD Design Based on AD9920A," 8th International Conference on Intelligent Computation Technology and Automation (ICICTA), Nanchang, China, 2015, pp. 921-924, doi: 10.1109/ICICTA.2015.234.
[4]. Yeong-Wen Daih et al., "Color CCD image sensor," International Symposium on VLSI Technology, Systems, and Applications, Taipei, Taiwan, 1991, pp. 126-130, doi: 10.1109/VTSA.1991.246696.
[5]. J. Guo and S. Sonkusale, "A High Dynamic Range CMOS Image Sensor for Scientific Imaging Applications," IEEE Sensors Journal, vol. 9, no. 10, pp. 1209-1218, Oct. 2009, doi: 10.1109/JSEN.2009.2029814.
[6]. E. R. Fossum, "CMOS image sensors: electronic camera-on-a-chip," IEEE Transactions on Electron Devices, vol. 44, no. 10, pp. 1689-1698, Oct. 1997, doi: 10.1109/16.628824.
[7]. S. K. Mendis et al., "CMOS active pixel image sensors for highly integrated imaging systems," IEEE Journal of Solid-State Circuits, vol. 32, no. 2, pp. 187-197, Feb. 1997, doi: 10.1109/4.551910.
[8]. Jun Ohta, "Fundamentals of CMOS image sensors," in Smart CMOS Image Sensors and Applications, 2nd ed., CRC Press, NY, USA, 2020, ch. 2, 4, pp. 11-57, 107-142.
[9]. Edmundoptics.com, 'Understanding Camera Sensors for Machine Vision Applications', 2018. [Online]. Available: https://www.edmundoptics.com/knowledge-center/application-notes/imaging/understanding-camera-sensors-for-machine-vision-applications/. [Accessed: 20-Aug-2021]
[10]. Siliconimaging.com, 'CMOS Fundamentals', 2015. [Online]. Available: http://www.siliconimaging.com/cmos_fundamentals.htm. [Accessed: 20-Aug-2021]
[11]. I. Gyongy et al., "256×256, 100kfps, 61% Fill-factor time-resolved SPAD image sensor for microscopy applications," IEEE International Electron Devices Meeting (IEDM), 2016, pp. 8.2.1-8.2.4, doi: 10.1109/IEDM.2016.7838373.
[12]. Daniel Durini, "Charge-coupled device (CCD) image sensors," in High Performance Silicon Imaging: Fundamentals and Applications of CMOS and CCD Image Sensors, 2nd ed., Woodhead Pub., Cambridge, MA, USA, 2019, ch. 2, pp. 75-92.
[13]. T. Le, N. Le and Y. M. Jang, "Performance of rolling shutter and global shutter camera in optical camera communications," International Conference on Information and Communication Technology Convergence (ICTC), Jeju Island, Korea, 2015, pp. 124-128, doi: 10.1109/ICTC.2015.7354509.
[14]. O. Saurer, K. Köser, J. Bouguet and M. Pollefeys, "Rolling Shutter Stereo," IEEE International Conference on Computer Vision, Sydney, Australia, 2013, pp. 465-472, doi: 10.1109/ICCV.2013.64.
[15]. B. Fan, K. Wang, Y. Dai and M. He, "RS-DPSNet: Deep Plane Sweep Network for Rolling Shutter Stereo Images," IEEE Signal Processing Letters, vol. 28, pp. 1550-1554, 2021, doi: 10.1109/LSP.2021.3099350.
[16]. D. Levski, M. Wäny and B. Choubey, "Compensation of Signal-Dependent Readout Noise in Photon Transfer Curve Characterisation of CMOS Image Sensors," IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 68, no. 1, pp. 102-105, Jan. 2021, doi: 10.1109/TCSII.2020.3010366.
[17]. Y. S. Han, E. Choi and M. G. Kang, "Smear removal algorithm using the optical black region for CCD imaging sensors," IEEE Transactions on Consumer Electronics, vol. 55, no. 4, pp. 2287-2293, Nov. 2009, doi: 10.1109/TCE.2009.5373800.
[18]. B. Kozacek, J. Grauzel and M. Frivaldsky, "The main capabilities and solutions for different types of the image sensors," 12th International Conference ELEKTRO, Mikulov, Czech Republic, May 2018, pp. 1-5, doi: 10.1109/ELEKTRO.2018.8398278.
[19]. D. L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289-1306, Apr. 2006, doi: 10.1109/TIT.2006.871582.
[20]. E. J. Candes, J. Romberg, and T. Tao, "Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489-509, Feb. 2006, doi: 10.1109/TIT.2005.862083.
[21]. E. J. Candes and M. B. Wakin, "An introduction to compressive sampling," IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 21-30, Mar. 2008, doi: 10.1109/MSP.2007.914731.
[22]. R. G. Baraniuk, "Compressive sensing [lecture notes]," IEEE Signal Processing Magazine, vol. 24, no. 4, pp. 118-124, Jul. 2007, doi: 10.1109/MSP.2007.4286571.
[23]. J. Romberg, "Imaging via compressive sampling," IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 14-20, Mar. 2008, doi: 10.1109/MSP.2007.914729.
[24]. Y. Gui, H. Lu, X. Jiang, F. Wu and C. W. Chen, "Compressed Pseudo-Analog Transmission System for Remote Sensing Images Over Bandwidth-Constrained Wireless Channels," IEEE Transactions on Circuits and Systems for Video Technology, vol. 30, no. 9, pp. 3181-3195, Sep. 2020, doi: 10.1109/TCSVT.2019.2935127.
[25]. G. Ramachandra and M. S. Bhat, "Compressed Sensing for Energy and Bandwidth Starved IoT Applications," IEEE Distributed Computing, VLSI, Electrical Circuits and Robotics (DISCOVER), Mangalore, India, Aug. 2018, pp. 131-134, doi: 10.1109/DISCOVER.2018.8674107.
[26]. S. Çelik, M. Başaran, S. Erküçük and H. A. Çırpan, "Comparison of compressed sensing based algorithms for sparse signal reconstruction," 24th Signal Processing and Communication Application Conference (SIU), Zonguldak, Turkey, 2016, pp. 1441-1444, doi: 10.1109/SIU.2016.7496021.
[27]. X. Zhang, W. Xu, Y. Cui, L. Lu and J. Lin, "On Recovery of Block Sparse Signals via Block Compressive Sampling Matching Pursuit," IEEE Access, vol. 7, pp. 175554-175563, 2019, doi: 10.1109/ACCESS.2019.2955759.
[28]. T. Yuske and Z. Mbaitiga, "Development of Drone Detecting Free Parking Space for Car Parking Guidance," International Conference on Intelligent Informatics and Biomedical Sciences (ICIIBMS), Shanghai, China, Nov. 2019, pp. 385-387, doi: 10.1109/ICIIBMS46890.2019.8991452.
[29]. P. Wei and F. He, "The Compressed Sensing of Wireless Sensor Networks Based on Internet of Things," IEEE Sensors Journal, vol. 21, no. 22, pp. 25267-25273, Nov. 2021, doi: 10.1109/JSEN.2021.3071151.

Ramachandra Gambheer (M'15) received his Bachelor of Engineering (B.E.) from Gulbarga University and Master of Technology (M.Tech.) from NITK, Surathkal, in 1990 and 1998, respectively. He started his career as a faculty member in the Department of Electronics & Communication Engineering, NITK, Surathkal, where he served from 1991 to 2000. He joined industry in the year 2000 at Semiconductor Technologies & Instruments (a TI spin-off company), where he designed several CCD and CMOS cameras for machine vision applications. He designed an FPGA-based 17 mm x 17 mm CMOS embedded camera (using a new high-speed CMOS sensor designed by the then Photobit, Pasadena, United States) with a built-in image processing application for chip capacitor inspection, which was the smallest camera fitted into a machine vision system. He worked with various image sensors, including Micron, Photobit, Kodak, Aptina, Fill Factory, etc. In 2005, he joined Intellitech India as Director, Engineering, where he designed several JTAG-based functional testers for various contract manufacturers across the world. He ran a startup company for three years, from 2010 to 2013, where he designed several machine vision inspection systems and functional testers for various contract manufacturers. He joined Cisco Systems in 2013 and is currently serving as Sr. Technical Operations Lead while pursuing part-time research at NITK, Surathkal. He co-authored a textbook, "Design of Secure IoT Systems – A Practical Approach Across Industries," published by McGraw Hill, New York. He is a joint recipient of the VLSI'99 international award for the FPGA-based design for implementation of MIL-STD-1553 protocols for Indian spacecraft. His research interests include IoT, connected cars, the Internet of Medical Things, and wearable IoT devices.