
This article has been accepted for publication in IEEE Access. This is the author's version, which has not been fully edited, and content may change prior to final publication. Citation information: DOI 10.1109/ACCESS.2023.3291396

Date of publication xxxx 00, 0000, date of current version xxxx 00, 0000.
Digital Object Identifier 10.1109/ACCESS.2017.Doi Number

CCD Sensor Based Cameras for Sustainable Streaming IoT Applications with Compressed Sensing
Ramachandra Gambheer1, Member, IEEE, and M.S Bhat2, Senior Member, IEEE
1,2National Institute of Technology Karnataka, Surathkal, India - 575025

Corresponding author: Ramachandra Gambheer (e-mail: [email protected]).

ABSTRACT This paper presents a comprehensive study of compressed sensing (CS) techniques applied to Charge Coupled Device (CCD) and Complementary Metal-Oxide Semiconductor (CMOS) sensor-based cameras. CS is a powerful technique for reducing the number of measurements required to capture high-quality images while maintaining a high signal-to-noise ratio (SNR). In this study, we propose a novel CS method for CCD and CMOS sensor-based cameras that combines a new sampling scheme with a sparsity-inducing transform and a reconstruction algorithm to achieve high-quality images with fewer measurements. This paper focuses on an efficient CCD image-capturing system suitable for embedded IoT applications. A hardware implementation has been built as a proof of concept, with an onboard Field Programmable Gate Array (FPGA) performing the compression. This hardware module is used over a wireless network to transmit and receive images under different test conditions with both CMOS and CCD sensors. For each use case, the Peak Signal-to-Noise Ratio (PSNR), average power, and memory usage are computed under ambient lighting conditions ranging from dark to very bright. The results show that a 640x480 CCD sensor with compressed sensing at a sparsity of 0.5 provides 13% power saving and 15% memory saving compared to uncompressed sensing in the no-light condition, with a PSNR of 25.76 dB; in the same no-light condition, the CMOS sensor does not capture any image at all. These results show that the CCD image-capturing system with compressed sensing can be conveniently used for embedded IoT applications. Data recovery from the wireless sensor network is done at a central office, where computing time and processing-power resources are not constrained. The weight of the CCD camera is approximately 100 grams with the modular build approach.

INDEX TERMS CCD Imager, CMOS Imager, Compressive Sensing, Dynamic Range, Fill-factor, Global
Shutter, IoT, LVDS, PSNR, Quantum Efficiency, Rolling Shutter, Wearable Camera

I. INTRODUCTION

Wearable video cameras that can transmit streaming video will be of great aid in surveillance applications. The concerns with wearable cameras are the quality of video, power dissipation, and resource requirements such as memory and bandwidth. Video quality depends on whether the sensor is a Complementary Metal Oxide Semiconductor (CMOS) or Charge Coupled Device (CCD) [1]. Even though CMOS sensors consume less power and provide high-quality picture frames, they fail under poor ambient lighting conditions because a minimum cut-in voltage (and hence a minimum light level) is required for the CMOS image sensor to function. On the other hand, CCD sensors provide adequate-quality pictures even in the darkest ambiance, where the human eye cannot see anything. This is possible because of the inherent characteristics of the CCD cells. However, CCD consumes more power and hence results in higher heat dissipation and shorter battery life, making its use challenging in wearable devices. To transmit a streaming video from a wearable device that can be reproduced at the receiver with high fidelity, we either need a higher bandwidth or must incorporate video compression before transmission. A higher bandwidth is obviously not an option for sustainable applications. Hence, the only choice is to perform video compression before transmission. Video compression involves image processing using a suitable processor and memory. This results in a higher power requirement and hence a need for a heat sink. These additional components not only increase the power requirement but also the weight of the wearable device, once again making it unsustainable.
VOLUME XX, 2017 1


This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License. For more information, see https://creativecommons.org/licenses/by-nc-nd/4

In this paper, we present the results of our research using a novel technique that addresses all these issues. We used a compressive sensing technique that avoids the use of large memory and on-board video processing, thereby limiting the power requirement. We implemented this with a CCD imager to obtain decent-quality video frames under low illumination. Combining the two makes a high-quality wearable Internet of Things (IoT) camera a reality, solving the power, memory, quality, and weight issues, thus resulting in a sustainable system.

The rest of this study is organized as follows. In Section II, we present an overview of CMOS and CCD image sensor characteristics along with their merits and demerits. In Section III, we present the requirements of an image sensor for wearable IoT applications. In Sections IV and V, we present a compressed sensing and retrieval technique that can be used for video frame compression. In Section VI, we present the hardware implementation. In Section VII, we present the results of the work we performed by combining image compression with the CCD image sensor; we carried out experiments with a CMOS image sensor as well for comparison purposes. In the concluding section, we present the challenges in image compression in IoT applications as a future research direction.

II. OVERVIEW OF CCD AND CMOS IMAGERS
There is an extensive body of literature available on construction techniques for both CCD [2][3][4] and CMOS [5][6][7][8] sensors for image capture.
Although the main purpose of CCD technology when it was invented at Bell Labs in 1969 was data storage [9], its capability to transport electronic charges made it possible to use it as an image sensor to convert light to analog pixel information. Table I shows the various characteristics and parameters of CCD and CMOS image sensors.

TABLE I
IMPORTANT CHARACTERISTICS OF CMOS AND CCD IMAGE SENSORS

Characteristics               | CCD Image Sensor        | CMOS Image Sensor
Output electrical signal      | Analog                  | Digital
Pixel Sensor (PPS/APS)        | Passive                 | Active
Readout Speed                 | Relatively Low          | Relatively High
Frame Rate                    | Low                     | High
Fixed Pattern Noise           | Low                     | High
Pixel Size                    | Large                   | Small
Resolution                    | Low - High              | Low - High
Fill Factor                   | High (100%)             | Moderate (60-75%)
Shutter Type                  | Global Shutter          | Rolling Shutter
Skew                          | No                      | Yes
On-Chip Voltage Requirement   | Multiple Voltages       | Single Voltage
Power Consumption             | Relatively High (2-5 W) | Low (few mW)
Dynamic Range                 | High                    | Moderate
Thermal Noise / Dark Current  | Low                     | High
Smearing & Blooming Effect    | More                    | Less
Quantum Efficiency            | High                    | Low

From the above list, we will explain a few important characteristics of both the CCD and CMOS image sensors.

A. ACTIVE OR PASSIVE PIXEL SENSOR
The pixel sensor can be active or passive, depending on the construction of the sensor. In a Passive Pixel Sensor (PPS), light is converted to voltage by employing photosites, tiny light collectors that convert photons into voltage; no amplification is performed in the passive pixel sensor. In an Active Pixel Sensor (APS) [7], light is converted to voltage using active electronic circuitry, including photodiodes and amplifiers. The CCD image sensor is built using PPS, whereas CMOS uses both PPS and APS.

B. FILL FACTOR
The percentage of a photosite (pixel) devoted to collecting light, i.e., the percentage of the photosite that is sensitive to light, is referred to as the pixel's Fill Factor (FF). A low FF requires a longer exposure time and bright light to capture an image of decent quality. CCD sensors have a 100% FF, whereas CMOS sensors have a much poorer FF. This is because, in a CMOS image sensor, each photosite includes circuitry for filtering noise and amplifying the signal; hence, the area that is sensitive to light is significantly reduced (typically 60%-75% of CCD). The concept of FF is illustrated in [10] as shown in Fig. 1.

FIGURE 1. Concept of Fill Factor; Courtesy: Silicon Imaging [10].

If the circuits cover 25% of each photosite, the sensor is said to have a FF of 75%. The higher the fill factor, the more sensitive the sensor is [10]. Recent studies [11] have shown that a fill factor on the order of 61% and slightly above is achievable in CMOS image sensors with Single Photon Avalanche Diode (SPAD) detector arrays.

C. SHUTTER TYPE
Two types of shutters are used in image sensors: rolling shutters and global shutters [12]. Owing to the nature of the construction mechanism, all CCD sensors have a global shutter and all CMOS sensors have rolling shutters.


The performance of rolling-shutter and global-shutter cameras is explained in [13].
In the rolling shutter mechanism, the pixels of each row of the frame are exposed one row at a time and then converted into digital signals one row at a time. The exposure times for all the rows are the same. In a global shutter, all the pixels are exposed to the light simultaneously and then read out sequentially. Since all pixels in a global shutter are captured at the same time, the image quality is superior to that of a rolling shutter.

D. DYNAMIC RANGE
The dynamic range of an image sensor is defined as the ratio of the maximum achievable signal (proportional to light) to the sensor noise. The dynamic range of an image sensor can be calculated using the photon transfer curve, as detailed in [16]. The dynamic range is the range between the readable brightest and darkest pixel areas that can be captured in a single image. CCD image sensors have a better dynamic range than CMOS image sensors because there is no minimum light required for the CCD to represent the charge.

E. SMEARING & BLOOMING EFFECT
The phenomenon of the acquired image being corrupted owing to the leakage of charge from one pixel to an adjacent pixel is known as the horizontal blooming effect or pixel saturation in image sensors. Both smearing (a type of contrast reduction) and blooming effects are more prominent in CCD than in CMOS technology. These effects are noticeable when a bright light is captured with a CCD image sensor. Fig. 2 shows a comparison of the smearing and blooming effects in CMOS and CCD image sensors captured with the same exposure time and ambient lighting conditions. Several compensation algorithms have been proposed [17] for removing smearing effects.

FIGURE 2. Smearing and Blooming Effect in CMOS & CCD.

F. QUANTUM EFFICIENCY
Quantum Efficiency (QE) is a measure of the efficiency with which a sensor device converts incident light, or photons, into electrical charge carriers, or electrons. The formula for quantum efficiency is:

QE = (number of charge carriers generated) / (number of photons incident) x 100%

For example, if a sensor produces X electrons when it is exposed to X photons, the QE is said to be 100%. Generally, the QE is measured at various photon wavelengths for a given image sensor. The calculation of quantum efficiency is influenced by various factors, such as temperature, voltage bias, and material quality. Therefore, it is essential to perform the measurements under controlled conditions to obtain accurate results. The QE of a CCD sensor can be modeled as the product of three components: the quantum efficiency of the sensor's photodiode, the reflectance of the sensor's surface, and the transmission of the sensor's cover glass. For a typical CCD sensor, the QE is highest in the visible range, with a peak value around 500-600 nm. At longer wavelengths, the QE drops rapidly due to absorption in the silicon substrate of the sensor; the QE is typically less than 10% beyond 900 nm. Fig. 3 shows the QE values captured at various photon wavelengths.
Studies have shown [18] that the QE of a CCD image sensor at a wavelength of 550 nm is 70%, whereas that of a CMOS sensor is only 37% at the same wavelength.

FIGURE 3. Quantum Efficiency of CCD Image Sensor at various photon wavelengths (nm).

III. IMAGE SENSOR REQUIREMENTS FOR EMBEDDED IOT APPLICATIONS
In the introduction, we presented a brief description of a few issues and challenges associated with using either CCD or CMOS image sensors for wearable IoT applications. In this section, we highlight the sensor requirements for embedded IoT applications in terms of their technical characteristics.
One of the main image sensor applications of IoT devices is wearable cameras for surveillance, used by law-and-order control officers or night watchman drones. Many times, the requirement might be to grab a video in very low visibility or in absolute dark conditions and transmit it to a remote place. To capture a picture frame that can be decently reproduced, the sensor should have the following specifications to make a minimum viable product (MVP):
• High fill factor – so that the image can be captured in very low visibility or in the absence of light, where the human eye cannot see anything clearly.
• Large pixel size – so that image quality is high even under low ambient lighting conditions.


• High dynamic range – so that a decent-quality image can be captured in low-light conditions.
• Global shutter – so that fast-moving objects can be captured without distortion.
• Low power consumption – this results in less heat dissipation, leading to a longer battery life.
• Low thermal noise – so that finer details in the captured image (such as a license plate number) are undistorted.
• Less smear and blooming effect – so that images captured in high-intensity light are not distorted.
• High quantum efficiency – so that lossless (high-quality) images are captured.
• As little on-board memory usage as possible.

Most of the above conditions are satisfied by the CCD sensor, which has the disadvantage of high power consumption. If we choose a CMOS sensor for the sake of low power consumption, we cannot capture any image in poor visibility or in the absence of light (dark ambient conditions), as the CMOS transistor requires a minimum threshold voltage for conduction. To overcome the challenges of high power and memory requirements while using CCD, we present a novel technique of compressed sensing, where the image is compressed while it is being captured. The results show that the average power consumption is relatively low for the CCD sensor based wearable IoT device with onboard compressed sensing.

IV. COMPRESSED SENSING
Compressed sensing (CS) was first demonstrated by Donoho in 2006 [19] and by Candès et al. [20], where a technique of simultaneous image sensing and compression, also known as compressive sensing, was used. The main idea in CS is to recover a sparse signal by means of nonadaptive linear measurements with convex optimization. This method involves recovering a multidimensional sparse vector with a dimension-reduction step.

A sparse matrix or sparse array is a matrix in which the majority of elements are zero; a matrix is said to be dense if most of its elements are non-zero. The sparsity of a matrix is obtained by dividing the number of zero-valued elements by the total number of elements. Compressive sensing is suitable for reducing the bandwidth and power of imaging applications. In the CS technique [21][22][23], only the non-redundant image content is sensed or captured while acquiring the image itself, instead of capturing the entire frame, including the redundant information, and then removing the redundancy. This technique is presented in [24] with the relevant algorithm.

A schematic of the CS implementation is shown in Fig. 4, and the compression process flow is shown in Fig. 5. The streaming video captured by the CCD image sensor is processed frame-by-frame and transmitted over a wireless sensor network. The received frames are reconstructed at the receiver end.

FIGURE 4. Implementation Block Schematic for Compressed Sensing.

FIGURE 5. Compressed Sensing Process Flow.

Generally, the acquired images are sparse in transform domains such as DCT, DFT, or DWT. To understand compressed sensing, consider a transform-domain basis Ψ represented as an n × n matrix such that the image x, represented as an n × 1 column vector, is sparse in Ψ. This can be expressed as shown in (1) [22][23]:

x = Ψθ;   ||θ||₀ ≤ k                                   (1)

where θ is an n × 1 k-sparse vector; that is, θ has at most k non-zero elements, with k << n.
In practice, the transformed image of x, denoted by y, an m × 1 column vector with m < n, is what is measured. In a linear measurement model represented by a matrix Φ, y is expressed as in (2):

y = Φx = ΦΨθ = Θθ                                      (2)

In compressed sensing computations, Φ is a random matrix of order m × n, known as a mixing matrix. Using this random matrix, the transformed image y can be represented using a basis whose vectors are random linear combinations of the original basis Ψ. In addition, Θ is expressed as Θ = ΦΨ.
It has been shown [21] that for efficient compression to occur, Φ must be incoherent with the basis Ψ. This condition is satisfied when Φ is chosen as a random matrix.
Note that the sparsity becomes 1 when k equals n, which results in zero compression. If we choose k << n, this gives reasonably high compression, but owing to the limited number of non-zero samples, the reconstructed image will be very lossy. While computing the CS using equation (2), with the random matrix Φ of order m × n, the compression ratio is m/n.
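As a toy illustration of the measurement model in (1) and (2), the sketch below builds a k-sparse θ, forms x = Ψθ, and measures y = Φx with a random mixing matrix. This is our own NumPy sketch, not the paper's implementation: the random orthonormal Ψ merely stands in for the DCT/DWT bases discussed above, and all sizes are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8              # pixels, measurements, sparsity (k << n)

# Psi: an n x n orthonormal sparsifying basis (stand-in for DCT/DWT).
Psi, _ = np.linalg.qr(rng.standard_normal((n, n)))

# theta: k-sparse coefficient vector, ||theta||_0 <= k, as in Eq. (1).
theta = np.zeros(n)
theta[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
x = Psi @ theta                   # image as a column vector, x = Psi theta

# Phi: random m x n mixing matrix, incoherent with Psi.
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

# Eq. (2): y = Phi x = Phi Psi theta = Theta theta.
y = Phi @ x
Theta = Phi @ Psi
assert np.allclose(y, Theta @ theta)
print(y.shape, x.shape)           # m = 64 measurements instead of n = 256 samples
```

With m/n = 0.25 here, only a quarter of the samples need to be transmitted; recovering θ from y is then the l1-minimization problem described below.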

2 VOLUME XX, 2017

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License. For more information, see https://creativecommons.org/licenses/by-nc-nd/4
This article has been accepted for publication in IEEE Access. This is the author's version which has not been fully edited and
content may change prior to final publication. Citation information: DOI 10.1109/ACCESS.2023.3291396

Ramachandra Gambheer and M.S Bhat: CCD Sensor Based Wearable


Cameras for Sustainable Streaming IoT Applications with Compressed
Sensing

In the ratio m/n, m represents the number of measurements and n represents the number of pixels. If m < n, the solution is said to be ill-posed, as there are more unknowns than equations.
This problem can be overcome by using the sparsity k of θ. In this scenario, the measurement vector y is a linear combination of k columns of the matrix Θ with θ ≠ 0. If we know beforehand the k non-zero entries of θ, then we can form an m × k system of linear equations to solve for the non-zero values. In this case, the number of equations m equals or exceeds the number of unknowns k. The measurement matrix Φ must satisfy the Restricted Isometry Property (RIP), which is a necessary and sufficient condition to ensure that the m × k system is resolvable [22]. The RIP is expressed as shown in (3):

(1 − δ) ≤ ||ΦΨ⁻¹v||₂² / ||v||₂² ≤ (1 + δ)             (3)

where v is any vector with the same non-zero coefficients as θ and 0 < δ < 1.
Various optimization solutions can be incorporated to determine θ from the known values of y and Θ. We chose the optimization method of [22][26], as shown in (4), where λ is a regularization parameter:

θ̂ = argminθ ||ΦΨθ − y||₂² + λ||θ||₁                   (4)

We obtain the optimal solution of equation (4) using l1 minimization as shown in (5), where ε is the maximum permissible error in the l1-minimization step:

min ||θ||₁  subject to  ||ΦΨθ − y||₂² ≤ ε             (5)

The original image data x can be reconstructed from the obtained solution θ̂, as shown in (6):

x̂ = Ψθ̂                                               (6)

We first acquired the image and converted it to grey scale before dividing it into 8 × 8 blocks. Subsequently, the DCT coefficients were computed for each block and normalized using the largest coefficient magnitude. We then chose a threshold value γ, and any coefficient value < γ was set to zero. The coefficients > γ are sampled to obtain 'm' samples. The value of 'm' is referred to as the 'sparsity' or sparse value. This process is outlined in the flowchart shown in Fig. 6.

FIGURE 6. Compressed Sensing Algorithm Flow chart: Read the Image File; Divide into 8x8 Blocks; For each block: Compute DCT Coefficients, Normalized Coeff = Coeff / Max(Coeff), Coeff[Normalized Coeff < Threshold] = 0, m ← sampled non-zero Coeff.

MATLAB simulation results [25] for standard images with different sparsity values are shown in Fig. 7.

FIGURE 7. MATLAB simulation results with different sparsity values: (a) and (b) are MRI images sampled with sparsity values of 0.3 and 0.7, respectively; (c) is the cameraman image sampled with a sparsity of 0.5.
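The flow of Fig. 6 can be sketched in a few lines of NumPy. This is a minimal software illustration under our own assumptions (8 × 8 blocks, an orthonormal DCT-II matrix, a 16 × 16 toy ramp image, and a threshold γ = 0.1); it is not the paper's MATLAB or FPGA pipeline:

```python
import numpy as np

def dct_matrix(N=8):
    """Orthonormal DCT-II matrix C; C @ block @ C.T gives the 2-D DCT."""
    k = np.arange(N)
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N))
    C[0, :] = np.sqrt(1.0 / N)
    return C

def compress_blocks(img, gamma=0.1):
    """Fig. 6 flow: per 8x8 block, DCT -> normalize by max |coeff| -> zero below gamma."""
    C = dct_matrix(8)
    coeffs = np.zeros(img.shape)
    kept = 0
    for i in range(0, img.shape[0], 8):
        for j in range(0, img.shape[1], 8):
            block = img[i:i + 8, j:j + 8].astype(float)
            c = C @ block @ C.T                              # compute DCT coefficients
            c[np.abs(c) / np.abs(c).max() < gamma] = 0.0     # thresholding step
            kept += np.count_nonzero(c)
            coeffs[i:i + 8, j:j + 8] = c
    return coeffs, kept / img.size        # kept fraction plays the role of the sparsity m/n

img = np.tile(np.arange(64, dtype=float).reshape(8, 8), (2, 2))  # toy 16x16 ramp image
coeffs, sparsity = compress_blocks(img, gamma=0.1)
print(sparsity)                           # fraction of retained (sampled) coefficients
```

Only the non-zero coefficients and their positions need to be transmitted; lowering γ retains more coefficients (a higher sparsity value and better PSNR), mirroring the trade-off discussed above.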


V. IMAGE RECONSTRUCTION
In compressed sensing (CS), the goal is to reconstruct an image from a small number of linear measurements, exploiting its sparsity or compressibility in a certain transform domain. Several familiar reconstruction techniques are commonly used to recover sparse signals at various sparsity levels. The most-used techniques [26] are Basis Pursuit, Matching Pursuit, Orthogonal Matching Pursuit (OMP), and Compressive Sampling Matching Pursuit (CoSaMP). CoSaMP is an iterative algorithm that combines ideas from OMP and Iterative Hard Thresholding (IHT), and it is known for its ability to handle higher levels of sparsity and noisy measurements. We have used the CoSaMP algorithm [27] to reconstruct the sparse signal.

Consider a compressive sensing problem where we aim to reconstruct a sparse signal x from measurements y, with the sensing process represented by a measurement matrix A. The CoSaMP algorithm involves the following five steps:

Initialization:
• Set the solution vector x as an all-zero vector of appropriate size.
• Compute the initial residual r = y − Ax, i.e., the measurements y minus the product of the sensing matrix A and the current solution estimate x.

Support Set Selection:
• Compute the correlation vector between the residual r and the columns of A: z = A^T * r.
• Select the indices of the K largest-magnitude entries in z: Ω = topk_indices(|z|, K).

Least-Squares Estimation:
• Form a submatrix A_Ω by selecting the columns of A indexed by the elements in Ω.
• Solve the least-squares problem min ||A_Ω * x_Ω − y||^2, where x_Ω represents the sub-vector of x corresponding to the indices in Ω.
• Obtain the least-squares solution x_Ω* within the support set Ω.

Here, 'x' represents the original signal vector, 'Ω' refers to the support set, which is a set of indices indicating the selected columns of the measurement matrix, and 'x_Ω*' denotes the estimated solution for the signal values within the support set 'Ω'.

Thresholding and Solution Update:
• Update the solution estimate x by setting all elements outside Ω to zero and assigning x_Ω* to the corresponding elements within Ω.
• Compute the updated residual r = y − Ax using the updated solution estimate x.

Convergence Check:
• If the residual norm ||r|| is below a predefined threshold ε, terminate the iterations and return the final solution estimate x.
• Otherwise, continue to the next iteration.

The algorithm iteratively refines the solution estimate by updating the support set Ω and performing a least-squares estimation within it. The thresholding step enforces sparsity by setting coefficients outside Ω to zero, and the solution-update step incorporates the least-squares estimate within Ω to improve the solution estimate. The reconstructed images are shown in Tables II through X.

VI. HARDWARE IMPLEMENTATION
In this section, we briefly present an IoT System-on-Module (SoM) built with CCD and CMOS image sensors. We adopted a modular hardware design approach such that the same hardware modules could be used for either of the sensors. A block diagram of the image acquisition and processing SoM is shown in Fig. 8. The modular design facilitates easier debugging of the hardware.

FIGURE 8. System-on-Module (SoM) Block Diagram for Image Acquisition and Processing.

FIGURE 9. System-on-Module (SoM) Hardware Implementation for Image Acquisition and Processing.

As shown in Fig. 9, the hardware implementation uses a modular design to make the embedded system as compact as possible. It consists of a sensor module, a compressed sensing module (which includes the FPGA), an interface module, and a power supply module. The image sensor module is designed in such a way that it can hold either a CCD or a CMOS imager with the associated sensor electronics. The overall weight of the hardware module is approximately 100 grams (without any type of casing), which makes it easy to use in embedded IoT applications. A detailed description of the hardware implementation is beyond the scope of this paper. The completely assembled module is shown in Fig. 9. The laboratory test setup is shown in Fig. 10. The tests were repeated for both the CMOS and CCD image sensors under the same test conditions for comparison purposes.
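The five CoSaMP steps listed in Section V can be condensed into a short NumPy sketch. This is a simplified software illustration (using the textbook 2K-candidate selection followed by pruning back to K) under assumed problem sizes; it is not the implementation used on the hardware or at the receiver:

```python
import numpy as np

def cosamp(A, y, K, max_iter=50, tol=1e-6):
    """Recover a K-sparse x from y = A @ x (A is the m x n sensing matrix)."""
    n = A.shape[1]
    x = np.zeros(n)                          # initialization: all-zero estimate
    r = y - A @ x                            # initial residual r = y - Ax
    for _ in range(max_iter):
        z = A.T @ r                          # correlate residual with columns of A
        omega = np.argsort(np.abs(z))[-2 * K:]           # 2K candidate indices
        support = np.union1d(omega, np.flatnonzero(x))   # merge with current support
        # least-squares estimation restricted to the support set
        x_ls, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(n)                      # thresholding: keep only K largest entries
        keep = np.argsort(np.abs(x_ls))[-K:]
        x[support[keep]] = x_ls[keep]
        r = y - A @ x                        # solution update: recompute residual
        if np.linalg.norm(r) < tol:          # convergence check
            break
    return x

rng = np.random.default_rng(1)
m, n, K = 80, 200, 5                         # assumed sizes for illustration
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, K, replace=False)] = rng.standard_normal(K)
x_hat = cosamp(A, A @ x_true, K)
print(np.linalg.norm(x_hat - x_true))        # ~0 when recovery succeeds
```

At the central office, such a routine would be applied to the received measurements before the inverse transform x̂ = Ψθ̂ recovers the image block.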


Constant parameters were:
- Sensor resolution: Video Graphics Array (VGA), 640 × 480;
- Exposure time (shutter speed): 10 ms;
- Frame rate: 30 fps.

Variable parameters were:
- Lighting conditions;
- Sparsity (0.2, 0.5, 0.7, programmed through the FPGA).

Memory and power savings are calculated by comparison with uncompressed data. Before presenting the test results and analysis, note that when the sparsity is 1, no samples are discarded from the captured image, so compressive sensing is not applied and no savings in power or memory are obtained. In other words, PSNR is highest when the sparsity is 1 (no compression) and lowest when the sparsity is close to 0.2 (highest compression). The results for the various test conditions are summarized in Tables II through X.

FIGURE 10. Laboratory test and measurement setup for acquiring images and measuring average power under various test conditions.

VII. TEST RESULTS
We have summarized the power measurements and the memory computation, based on the dynamic FIFO register implemented in the FPGA, for different test conditions using monochrome CMOS and CCD imagers. Both sensors were set up with the same exposure time, frame rate, and resolution for an apples-to-apples comparison. The Peak Signal-to-Noise Ratio (PSNR) measures the degree to which the original image is corrupted by distortion. The PSNR can be mathematically represented as shown in (7):

    PSNR = 20 log10( MAX_f / sqrt(MSE) )                                    (7)

where f is the m × n matrix form of the original image, MAX_f is the peak pixel value of f, and MSE is the mean square error, expressed as shown in (8):

    MSE = (1 / mn) * Σ_{i=0}^{m-1} Σ_{j=0}^{n-1} || f(i, j) − g(i, j) ||^2   (8)

where g is the matrix form of the decompressed image, m is the number of pixels in a row (i is the row index), and n is the number of pixels in a column (j is the column index).

The PSNR is computed for each image using the algorithm described in Fig. 11. The results for each test case are listed in Table II.

TABLE II
TEST CONDITION 1: AMBIENT LIGHT INTENSITY < 100 LUX; SPARSITY = 0.2

Sensor | Original Image    | Reproduced Image  | PSNR (dB) | Memory Saving | Power Saving
CMOS   | No image captured | No image captured | 0         | NA            | NA
CCD    | (image)           | (image)           | 22.75     | 21%           | 14%
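Equations (7) and (8) can be evaluated directly. The following pure-Python sketch is a generic illustration, not the authors' FPGA-side implementation; the peak value of 255 assumes 8-bit monochrome pixels:

```python
import math

def psnr(f, g, max_val=255.0):
    """PSNR per (7)-(8): f is the original image, g the decompressed image,
    both given as 2-D lists of pixel values (m rows x n columns)."""
    m, n = len(f), len(f[0])
    # (8): mean square error over all m*n pixel differences
    mse = sum((f[i][j] - g[i][j]) ** 2 for i in range(m) for j in range(n)) / (m * n)
    if mse == 0:
        return float("inf")  # identical images: no distortion
    # (7): PSNR in dB, with max_val the peak pixel value (255 for 8-bit images)
    return 20.0 * math.log10(max_val / math.sqrt(mse))

# Example: a uniform error of 2 gray levels on an 8-bit image
f = [[0] * 8 for _ in range(8)]
g = [[2] * 8 for _ in range(8)]
print(round(psnr(f, g), 2))  # MSE = 4, so PSNR = 20*log10(255/2) ≈ 42.11
```

A larger MSE lowers the PSNR, which is why the heavily compressed (sparsity 0.2) cases in the tables below report the lowest values.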
FIGURE 11. PSNR computation algorithm.

In Tables II through X, the original image and the reproduced image have the following meanings:
- Original image: the image at the output of the sensor.
- Reproduced image: the image at the IoT receiver in the wireless sensor network.

TABLE III
TEST CONDITION 2: AMBIENT LIGHT INTENSITY < 100 LUX; SPARSITY = 0.5

Sensor | Original Image    | Reproduced Image  | PSNR (dB) | Memory Saving | Power Saving
CMOS   | No image captured | No image captured | 0         | NA            | NA
CCD    | (image)           | (image)           | 25.76     | 15%           | 13%

TABLE IV
TEST CONDITION 3: AMBIENT LIGHT INTENSITY < 100 LUX; SPARSITY = 0.7

2 VOLUME XX, 2017

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License. For more information, see https://creativecommons.org/licenses/by-nc-nd/4

Sensor | Original Image    | Reproduced Image  | PSNR (dB) | Memory Saving | Power Saving
CMOS   | No image captured | No image captured | 0         | NA            | NA
CCD    | (image)           | (image)           | 30.71     | 12%           | 11%

TABLE V
TEST CONDITION 4: AMBIENT LIGHT INTENSITY = 1000 LUX; SPARSITY = 0.2

Sensor | Original Image | Reproduced Image | PSNR (dB) | Memory Saving | Power Saving
CMOS   | (image)        | (image)          | 24.93     | 14%           | 13%
CCD    | (image)        | (image)          | 25.76     | 15%           | 13%

TABLE VI
TEST CONDITION 5: AMBIENT LIGHT INTENSITY = 1000 LUX; SPARSITY = 0.5

Sensor | Original Image | Reproduced Image | PSNR (dB) | Memory Saving | Power Saving
CMOS   | (image)        | (image)          | 20.25     | 20%           | 15%
CCD    | (image)        | (image)          | 25.76     | 15%           | 13%

TABLE VII
TEST CONDITION 6: AMBIENT LIGHT INTENSITY = 1000 LUX; SPARSITY = 0.7

Sensor | Original Image | Reproduced Image | PSNR (dB) | Memory Saving | Power Saving
CMOS   | (image)        | (image)          | 28.86     | 12%           | 10%
CCD    | (image)        | (image)          | 32.71     | 12%           | 11%

TABLE VIII
TEST CONDITION 7: AMBIENT LIGHT INTENSITY = 100000 LUX; SPARSITY = 0.2

Sensor | Original Image | Reproduced Image | PSNR (dB) | Memory Saving | Power Saving
CMOS   | (image)        | (image)          | 20.27     | 21%           | 14%
CCD    | (image)        | (image)          | 20.12     | 21%           | 14%

TABLE IX
TEST CONDITION 8: AMBIENT LIGHT INTENSITY = 100000 LUX; SPARSITY = 0.5

Sensor | Original Image | Reproduced Image | PSNR (dB) | Memory Saving | Power Saving
CMOS   | (image)        | (image)          | 25.32     | 15%           | 14%
CCD    | (image)        | (image)          | 25.42     | 15%           | 13%

TABLE X
TEST CONDITION 9: AMBIENT LIGHT INTENSITY = 100000 LUX; SPARSITY = 0.7

Sensor | Original Image | Reproduced Image | PSNR (dB) | Memory Saving | Power Saving
CMOS   | (image)        | (image)          | 32.86     | 12%           | 11%

CCD    | (image)        | (image)          | 34.71     | 13%           | 12%

Note: All the images in Tables II through X are true images captured with both the CMOS and the CCD camera and condensed for printing purposes. The original resolution of each image is 640 × 480 pixels.

VIII. CONCLUSION
The PSNR values computed for each test case are summarized in Table XI and plotted in Fig. 12 for the various lighting conditions and sparsity levels, for both the CCD and the CMOS sensor. The percentage power savings of the CMOS- and CCD-sensor-based embedded systems at different sparsity values and light intensities are listed in Table XII and plotted in Fig. 13.

TABLE XI
PSNR VALUES (dB) COMPUTED FOR EACH TEST CASE FOR DIFFERENT LIGHTING CONDITIONS

Light (lux) | CMOS, S=0.2 | CCD, S=0.2 | CMOS, S=0.5 | CCD, S=0.5 | CMOS, S=0.7 | CCD, S=0.7
100     | 0     | 22.75 | 0     | 25.76 | 0     | 30.71
200     | 10.23 | 23.65 | 11.75 | 26.89 | 12.21 | 31.32
400     | 12.45 | 24.45 | 12.76 | 27.65 | 12.98 | 32.41
1000    | 25.23 | 25.51 | 27.23 | 27.93 | 32.45 | 32.71
2000    | 26.56 | 26.71 | 28.76 | 28.88 | 33.29 | 33.86
3000    | 27.44 | 27.86 | 29.23 | 29.86 | 34.23 | 34.98
5000    | 29.64 | 29.87 | 31.56 | 32.78 | 35.53 | 35.89
10000   | 30.55 | 30.86 | 32.82 | 33.18 | 36.39 | 36.98
20000   | 31.66 | 31.78 | 33.59 | 34.12 | 37.81 | 37.99
50000   | 43.45 | 44.57 | 46.86 | 48.55 | 50.32 | 53.11
100000  | 55.43 | 43.48 | 58.48 | 47.69 | 60.47 | 52.32

TABLE XII
POWER SAVINGS FOR CCD AND CMOS IMAGE SENSORS AT DIFFERENT SPARSITY VALUES

Light (lux) | CMOS, S=0.2 | CCD, S=0.2 | CMOS, S=0.5 | CCD, S=0.5 | CMOS, S=0.7 | CCD, S=0.7
100     | 0     | 14%   | 0     | 13%   | 0   | 11%
200     | 0     | 14%   | 0     | 13%   | 0   | 11%
400     | 0     | 14%   | 0     | 13%   | 0   | 11%
1000    | 13%   | 13%   | 15%   | 15%   | 10% | 11%
2000    | 13.2% | 13.2% | 15.2% | 15%   | 12% | 14%
3000    | 13.4% | 13.4% | 15.4% | 15%   | 13% | 15%
5000    | 13.6% | 13.6% | 15.5% | 15%   | 14% | 16%
10000   | 13.8% | 13.8% | 15.6% | 15%   | 15% | 16%
20000   | 14%   | 14%   | 15.7% | 14%   | 16% | 16%
50000   | 14%   | 14%   | 15.8% | 15%   | 17% | 16%
100000  | 14%   | 14%   | 14%   | 13%   | 11% | 12%

FIGURE 13. Power savings for CMOS and CCD sensors at different sparsity values (percentage power saving vs. ambient light in lux, logarithmic scale).
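The subsampling that the programmed sparsity controls can be sketched as follows. This is a generic compressive-sampling illustration (a random pixel mask retaining a fraction `sparsity` of the samples), not the actual measurement matrix programmed into the FPGA; the function name and structure are illustrative assumptions:

```python
import random

def compressive_sample(image, sparsity, seed=0):
    """Keep a random fraction `sparsity` of the pixels of a 2-D image.
    Returns the retained measurements as (row, col, value) triples."""
    m, n = len(image), len(image[0])
    k = round(sparsity * m * n)  # number of retained samples
    rng = random.Random(seed)    # fixed seed so sender and receiver share the mask
    coords = rng.sample([(i, j) for i in range(m) for j in range(n)], k)
    return [(i, j, image[i][j]) for (i, j) in coords]

# A 480x640 VGA frame at sparsity 0.2 keeps 20% of the samples
frame = [[0] * 640 for _ in range(480)]
meas = compressive_sample(frame, 0.2)
print(len(meas), 480 * 640)  # 61440 307200
```

In this toy model, a lower sparsity setting means fewer retained samples to buffer and transmit; the memory and power savings actually measured in the paper (Tables II through X and Table XII) come from the FIFO-based hardware implementation rather than from this simple count.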

FIGURE 12. PSNR values at various lighting conditions for CMOS and CCD sensors.

Under low ambient light conditions, the CMOS sensor cannot be used, whereas the CCD sensor outperforms it and the reproduced image is clearly readable. This is exhibited in Tables II, III, and IV. The plot in Fig. 12 also illustrates this phenomenon: the PSNR for the CMOS image sensor drops when the light intensity falls below 500 lux, while at the same light levels the PSNR values obtained for the CCD sensor remain considerably high. This is the biggest advantage of using a CCD sensor in low- or no-light conditions, as the CCD imager develops a charge even in the absence of light, which is essential for wearable IoT devices. Power and memory savings are functions of the sparsity for a given sensor resolution. The savings shown in our test results were for VGA resolution. For higher-resolution image


sensors, the savings are significantly higher. With the adoption of compressed sensing onboard the CCD image sensor, the memory and power requirements are reduced. Hence, a CCD image sensor with compressed sensing can be used for wearable IoT applications, making them sustainable even in low-light conditions. The image recovery system can run in a cloud or a data center with no resource limitations. Using CCD sensors for drone applications [28] has a definite advantage.

From Table XII and Fig. 13, we can see that the power saving is a function of the sparsity value and does not change with the ambient lighting conditions. With the combination of a CCD sensor and compressed sensing (with a sparsity of 0.5), we can achieve a reasonably good image-capturing system for embedded IoT applications. The memory saving is likewise a function of the sparsity level and the image sensor resolution: the higher the resolution of the image sensor, the greater the memory saving at a sparsity level of 0.5. The percentage memory savings are shown in Tables II to X for all test conditions.

The entire system can be made even more sustainable by using both CMOS and CCD sensors in the modular design shown in Fig. 8 and Fig. 9 for night-watchman drone applications. Both sensors can be connected to a single backend module through a multiplexer; at any given time, only one sensor module, CMOS or CCD, is connected to the backend system. The multiplexer select signal is controlled by a light sensor: under adequate ambient lighting conditions the CMOS sensor is chosen, and in the dark the CCD sensor is chosen. Energy models for sensor nodes in wireless sensor networks with compressed sensing are proposed in [29].

The method proposed in this paper can achieve high-quality images with significant reductions in the number of measurements required, making it a promising technique for CCD sensor-based cameras. The simulation and experiment results demonstrate the effectiveness of the proposed method in reconstructing images with a high SNR and without visible artifacts. Future work can investigate the use of different sparsifying transforms, measurement matrices, and reconstruction algorithms to further improve the performance of the proposed method. Additionally, the proposed method can be extended to color images by compressing and reconstructing each color channel separately. Overall, the proposed method has the potential to improve the performance of CCD sensor-based cameras in various imaging applications where CMOS sensors cannot capture images.

REFERENCES
[1]. E. R. Fossum, "CMOS image sensors: Electronic camera-on-a-chip," IEEE Trans. Electron Devices, vol. 44, no. 10, pp. 1689-1698, Oct. 1997, doi: 10.1109/16.628824.
[2]. M. G. Farrier and R. H. Dyck, "A Large Area TDI Image Sensor for Low Light Level Imaging," IEEE Journal of Solid-State Circuits, vol. 15, no. 4, pp. 753-758, Aug. 1980, doi: 10.1109/JSSC.1980.1051465.
[3]. L. Zhiyong, Y. Weihua and D. Xiance, "The Analog Front End of Ultra-High Resolution CCD Design Based on AD9920A," 8th International Conference on Intelligent Computation Technology and Automation (ICICTA), Nanchang, China, 2015, pp. 921-924, doi: 10.1109/ICICTA.2015.234.
[4]. Yeong-Wen Daih et al., "Color CCD image sensor," International Symposium on VLSI Technology, Systems, and Applications, Taipei, Taiwan, 1991, pp. 126-130, doi: 10.1109/VTSA.1991.246696.
[5]. J. Guo and S. Sonkusale, "A High Dynamic Range CMOS Image Sensor for Scientific Imaging Applications," IEEE Sensors Journal, vol. 9, no. 10, pp. 1209-1218, Oct. 2009, doi: 10.1109/JSEN.2009.2029814.
[6]. E. R. Fossum, "CMOS image sensors: electronic camera-on-a-chip," IEEE Transactions on Electron Devices, vol. 44, no. 10, pp. 1689-1698, Oct. 1997, doi: 10.1109/16.628824.
[7]. S. K. Mendis et al., "CMOS active pixel image sensors for highly integrated imaging systems," IEEE Journal of Solid-State Circuits, vol. 32, no. 2, pp. 187-197, Feb. 1997, doi: 10.1109/4.551910.
[8]. Jun Ohta, "Fundamentals of CMOS image sensors," in Smart CMOS Image Sensors and Applications, 2nd ed., CRC Press, NY, USA, 2020, ch. 2, 4, pp. 11-57, 107-142.
[9]. Edmundoptics.com, "Understanding Camera Sensors for Machine Vision Applications," 2018. [Online]. Available: https://www.edmundoptics.com/knowledge-center/application-notes/imaging/understanding-camera-sensors-for-machine-vision-applications/. [Accessed: 20-Aug-2021].
[10]. Siliconimaging.com, "CMOS Fundamentals," 2015. [Online]. Available: http://www.siliconimaging.com/cmos_fundamentals.htm. [Accessed: 20-Aug-2021].
[11]. I. Gyongy et al., "256×256, 100kfps, 61% Fill-factor time-resolved SPAD image sensor for microscopy applications," IEEE International Electron Devices Meeting (IEDM), 2016, pp. 8.2.1-8.2.4, doi: 10.1109/IEDM.2016.7838373.
[12]. Daniel Durini, "Charge-coupled device (CCD) image sensors," in High Performance Silicon Imaging: Fundamentals and Applications of CMOS and CCD Image Sensors, 2nd ed., Woodhead Pub., Cambridge, MA, USA, 2019, ch. 2, pp. 75-92.
[13]. T. Le, N. Le and Y. M. Jang, "Performance of rolling shutter and global shutter camera in optical camera communications," in International Conference on Information and Communication Technology


Convergence (ICTC), Jeju Island, Korea, 2015, pp. 124-128, doi: 10.1109/ICTC.2015.7354509.
[14]. O. Saurer, K. Köser, J. Bouguet and M. Pollefeys, "Rolling Shutter Stereo," in IEEE International Conference on Computer Vision, Sydney, Australia, 2013, pp. 465-472, doi: 10.1109/ICCV.2013.64.
[15]. B. Fan, K. Wang, Y. Dai and M. He, "RS-DPSNet: Deep Plane Sweep Network for Rolling Shutter Stereo Images," IEEE Signal Processing Letters, vol. 28, pp. 1550-1554, 2021, doi: 10.1109/LSP.2021.3099350.
[16]. D. Levski, M. Wäny and B. Choubey, "Compensation of Signal-Dependent Readout Noise in Photon Transfer Curve Characterisation of CMOS Image Sensors," IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 68, no. 1, pp. 102-105, Jan. 2021, doi: 10.1109/TCSII.2020.3010366.
[17]. Y. S. Han, E. Choi and M. G. Kang, "Smear removal algorithm using the optical black region for CCD imaging sensors," IEEE Transactions on Consumer Electronics, vol. 55, no. 4, pp. 2287-2293, Nov. 2009, doi: 10.1109/TCE.2009.5373800.
[18]. B. Kozacek, J. Grauzel and M. Frivaldsky, "The main capabilities and solutions for different types of the image sensors," in 12th International Conference ELEKTRO, Mikulov, Czech Republic, May 2018, pp. 1-5, doi: 10.1109/ELEKTRO.2018.8398278.
[19]. D. L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289-1306, Apr. 2006, doi: 10.1109/TIT.2006.871582.
[20]. E. J. Candes, J. Romberg and T. Tao, "Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489-509, Feb. 2006, doi: 10.1109/TIT.2005.862083.
[21]. E. J. Candes and M. B. Wakin, "An introduction to compressive sampling," IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 21-30, Mar. 2008, doi: 10.1109/MSP.2007.914731.
[22]. R. G. Baraniuk, "Compressive sensing [lecture notes]," IEEE Signal Processing Magazine, vol. 24, no. 4, pp. 118-124, Jul. 2007, doi: 10.1109/MSP.2007.4286571.
[23]. J. Romberg, "Imaging via compressive sampling," IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 14-20, Mar. 2008, doi: 10.1109/MSP.2007.914729.
[24]. Y. Gui, H. Lu, X. Jiang, F. Wu and C. W. Chen, "Compressed Pseudo-Analog Transmission System for Remote Sensing Images Over Bandwidth-Constrained Wireless Channels," IEEE Transactions on Circuits and Systems for Video Technology, vol. 30, no. 9, pp. 3181-3195, Sep. 2020, doi: 10.1109/TCSVT.2019.2935127.
[25]. G. Ramachandra and M. S. Bhat, "Compressed Sensing for Energy and Bandwidth Starved IoT Applications," in IEEE Distributed Computing, VLSI, Electrical Circuits and Robotics (DISCOVER), Mangalore, India, Aug. 2018, pp. 131-134, doi: 10.1109/DISCOVER.2018.8674107.
[26]. S. Çelik, M. Başaran, S. Erküçük and H. A. Çırpan, "Comparison of compressed sensing based algorithms for sparse signal reconstruction," in 24th Signal Processing and Communication Application Conference (SIU), Zonguldak, Turkey, 2016, pp. 1441-1444, doi: 10.1109/SIU.2016.7496021.
[27]. X. Zhang, W. Xu, Y. Cui, L. Lu and J. Lin, "On Recovery of Block Sparse Signals via Block Compressive Sampling Matching Pursuit," IEEE Access, vol. 7, pp. 175554-175563, 2019, doi: 10.1109/ACCESS.2019.2955759.
[28]. T. Yuske and Z. Mbaitiga, "Development of Drone Detecting Free Parking Space for Car Parking Guidance," in International Conference on Intelligent Informatics and Biomedical Sciences (ICIIBMS), Shanghai, China, Nov. 2019, pp. 385-387, doi: 10.1109/ICIIBMS46890.2019.8991452.
[29]. P. Wei and F. He, "The Compressed Sensing of Wireless Sensor Networks Based on Internet of Things," IEEE Sensors Journal, vol. 21, no. 22, pp. 25267-25273, Nov. 2021, doi: 10.1109/JSEN.2021.3071151.

Ramachandra Gambheer (M'15) received his Bachelor of Engineering (B.E.) from Gulbarga University and Master of Technology (M.Tech.) from NITK, Surathkal, in 1990 and 1998, respectively. He started his career as a faculty member in the Department of Electronics & Communication Engineering, NITK, Surathkal, where he served from 1991 to 2000. He joined industry in the year 2000 at Semiconductor Technologies & Instruments (a TI spin-off company), where he designed several CCD and CMOS cameras for machine vision applications. He designed an FPGA-based 17 mm × 17 mm CMOS embedded camera (using a new high-speed CMOS sensor designed by the then Photobit, Pasadena, United States) with a built-in image processing application for chip capacitor inspection, which was the smallest camera fitted into a machine vision system. He worked with various image sensors, including Micron, Photobit, Kodak, Aptina, Fill Factory, etc. In 2005, he joined Intellitech India as Director, Engineering, where he designed several JTAG-based functional testers for various contract manufacturers across the world. He ran a startup company for three years, from 2010 to 2013, where he designed several machine vision inspection systems and functional testers for various contract manufacturers. He joined Cisco Systems in 2013 and is currently serving as Sr. Technical Operations Lead while pursuing part-time research at NITK, Surathkal. He co-authored a textbook, "Design of Secure IoT Systems – A Practical Approach Across Industries," published by McGraw Hill, New York. He is a joint recipient of the VLSI'99 international award for the FPGA-based design for the implementation of MIL-STD-1553 protocols for Indian spacecraft. His research interests include IoT, connected cars, the Internet of Medical Things, and wearable IoT devices.


M. S. Bhat (M'02–SM'13) received his M.Sc. in Physics from Mangalore University, M.Tech. from NITK Surathkal, and Ph.D. from the Indian Institute of Science, Bangalore, in 1987, 1989, and 2007, respectively. He joined the Department of Electronics and Communication Engineering, NITK Surathkal, as a faculty member in 1989 and is currently a Professor in the department. He was a visiting researcher under the ODA program at UMIST, Manchester, during 1997 and a visiting researcher at TU Eindhoven under the Erasmus Mundus program during 2009. At NITK Surathkal, he has executed several sponsored projects: the World Bank sponsored project IMPACT during 2000-2002, the SMDP project in VLSI from MeitY, Govt. of India, during 2007-2015, RF-MEMS projects from DIT and NPMASS, Govt. of India, during 2010-2014, energy harvesting from passenger seats funded by ANRC Boeing during 2014-2016, collaborative research in sensor systems for public utilities during 2016-18 funded by the Royal Academy of Engineering, UK, and several other collaborative projects. He is a senior member of IEEE and a life member of ISSS and ISTE. His research interests include low-power analog & mixed-signal design, submicron devices, and image & signal processing.

