Multimedia Systems UNIT-III: Data Compression Standards & Storage

Coding Techniques

Source Coding
Source coding takes into account the semantics and the characteristics of the data; thus the degree of compression that can be achieved depends on the data content. Source coding is a lossy coding process in which there is some loss of information content. For example, in the case of speech, the signal is transformed from the time domain to the frequency domain. In psychoacoustic coding, the encoder analyses the incoming audio signal to identify perceptually important information, incorporating several psychoacoustic principles of the human ear.

Entropy Coding
Entropy coding is used regardless of the media's specific characteristics. The data stream to be compressed is considered a simple digital sequence, and the semantics of the data are ignored. Entropy coding is concerned solely with how the information is represented.

Run Length Coding
Typical applications of this type of encoding are those where the source information comprises long substrings of the same character or binary digit. Instead of transmitting the source string in the form of independent code-words or bits, it is transmitted in the form of a different set of code-words which indicates not only the particular character or bit being transmitted but also the number of characters in the substring. For example, the string AAAAABBBTTTTTTMMMMMMMM can be coded as A!5BBBT!6M!8 (there is no point encoding a character that repeats fewer than four times). A short sketch of this scheme appears at the end of this section.

Diatomic encoding is a variation of run-length encoding based on a combination of two data bytes. This technique determines the most frequently occurring pairs of bytes. For example, in the English language "E", "T", "TH", "A", "S" and "HE" occur most frequently.

Huffman Coding
Huffman coding is an example of variable length coding. It is based on the idea that the probability of occurrence is not the same for every character, so different numbers of bits are assigned to different characters. In variable length coding, the characters that occur most frequently are assigned the fewest bits.

Arithmetic Coding
Unlike Huffman coding, which uses a separate code word for each character, arithmetic coding yields a single code word for each encoded string of characters. The first step is to divide the numeric range from 0 to 1 into segments, one for each different character present in the message to be sent (including the termination character), with the size of each segment set by the probability of the related character.

Hybrid Coding
This type of coding mechanism involves the combined use of source coding and entropy coding to enhance the compression ratio while still preserving the quality of the information content. Examples of hybrid coding include the MPEG, JPEG, H.261 and DVI techniques.

Steps in Data Compression

1. Preparation
Preparation involves analog-to-digital conversion of the picture, where the image is divided into blocks of 4 x 4 or 8 x 8 pixels.

2. Processing
This involves the conversion of the information from the time domain to the frequency domain by using the DCT.

3. Quantization
Quantization defines the discrete levels or values that the information is allowed to take. This process involves a reduction of precision. The quantization process may be uniform or differential, depending on the characteristics of the picture.

[Figure: Major steps of data compression - uncompressed picture -> picture preparation -> picture processing -> quantization -> entropy coding -> compressed picture.]

4. Entropy Encoding
This is the lossless compression step, in which the semantics of the data are ignored and only their characteristics are considered, e.g. run-length coding or entropy coding. After compression, the compressed video stream contains a specification of the image starting point, and an identification of the compression technique may be part of the data stream. An error correction code may also be added to the stream.

Decompression is the inverse process of compression.
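As a small illustration of the run-length scheme described above, here is a minimal Python sketch. The `char!count` marker convention and the rule of only encoding runs of four or more characters follow the example in the text; the function name and the lack of an escape mechanism for literal `!` characters are simplifications of our own:

```python
from itertools import groupby

def rle_encode(text, min_run=4):
    """Run-length encode: runs of >= min_run become '<char>!<count>'.
    Simplification: assumes '!' never occurs in the input itself."""
    out = []
    for char, group in groupby(text):
        count = len(list(group))
        if count >= min_run:
            out.append(f"{char}!{count}")
        else:
            out.append(char * count)   # short runs are cheaper left as-is
    return "".join(out)

print(rle_encode("AAAAABBBTTTTTTMMMMMMMM"))   # -> A!5BBBT!6M!8
```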
JPEG Standards and Requirements

JPEG (Joint Photographic Experts Group)
The JPEG standard for compressing continuous-tone still pictures (e.g. photographs) was developed by photographic experts working under the joint auspices of ITU, ISO and IEC. JPEG is significant in compression because MPEG, the standard for motion picture compression, is essentially JPEG encoding applied to each frame separately.

The requirements of the JPEG standard are:
* The JPEG implementation should be independent of image size.
* The JPEG implementation should be applicable to any image and pixel aspect ratio.
* Color representation itself should be independent of the special implementation.
* Image content may be of any complexity, with any statistical characteristics.
* The JPEG standard specification should be state of the art (or near it) regarding the compression factor and achieved image quality.
* Processing complexity must permit a software solution to run on as many available standard processors as possible; additionally, the use of specialized hardware should substantially enhance image quality.

Steps in JPEG Compression

Step 1: (Block Preparation)
This step involves block preparation. For example, let us assume the input to be a 640 x 480 RGB image with 24 bits/pixel. The luminance and chrominance components of the image are calculated using the YIQ model for the NTSC system:

Y = 0.30R + 0.59G + 0.11B
I = 0.60R - 0.28G - 0.32B
Q = 0.21R - 0.52G + 0.31B

(For the PAL system the YUV model is used.) Separate matrices are constructed for Y, I and Q, each with elements in the range 0 to 255. Square blocks of four pixels are averaged in the I and Q matrices to reduce them to 320 x 240; thus the data is already compressed by a factor of two. Next, 128 is subtracted from each element of all three matrices to put 0 in the middle of the range. Finally, each matrix is divided up into 8 x 8 blocks: the Y matrix has 4800 blocks, while the other two have 1200 blocks each.

[Figure: The operation of JPEG in lossy sequential mode - block preparation -> discrete cosine transformation -> quantization -> differential quantization -> run-length encoding -> statistical output.]

Step 2: (Discrete Cosine Transformation)
The Discrete Cosine Transformation is applied to each of the 7200 blocks separately. The output of each DCT is an 8 x 8 matrix of DCT coefficients. DCT element (0,0) is the average value of the block; the other elements tell how much spectral power is present at each spatial frequency.

Step 3: (Quantization)
In this step the less important DCT coefficients are wiped out. This transformation is done by dividing each of the coefficients in the 8 x 8 DCT matrix by a weight taken from a table. If all the weights are 1, the transformation does nothing; however, if the weights increase sharply from the origin, higher spatial frequencies are dropped quickly. A small numeric sketch of this step follows below.
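To make the quantization step concrete, here is a small Python sketch that divides an 8 x 8 block of DCT coefficients element-wise by a weight table and rounds the result. The weight table used here is an invented placeholder that merely grows away from the DC corner; it is not the actual JPEG default table:

```python
import numpy as np

def quantize(dct_block, weights):
    """Divide each DCT coefficient by its weight and round to the
    nearest integer; larger weights discard more precision."""
    return np.rint(dct_block / weights).astype(int)

# Placeholder weight table: weights grow with distance from the (0,0)
# "DC" corner, so higher spatial frequencies are coarsened the most.
i, j = np.indices((8, 8))
weights = 1 + 2 * (i + j)

block = np.random.randint(-128, 128, size=(8, 8))   # stand-in DCT output
print(quantize(block, weights))
```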
Step 4: (Differential Quantization)
This step reduces the (0,0) value of each block by replacing it with the amount by which it differs from the corresponding element in the previous block. Since these elements are the averages of their respective blocks, they should change slowly, so taking the differential values reduces most of them to small values. The (0,0) values are referred to as the DC components; the other values are the AC components.

Step 5: (Run-Length Encoding)
This step linearizes the 64 elements and applies run-length encoding to the list. In order to concentrate zeros together, a zigzag scanning pattern is used (a sketch of the zigzag scan appears later in this section). Finally, run-length coding is used to compress the elements.

Step 6: (Statistical Encoding)
Huffman coding encodes the numbers for storage or transmission, assigning common numbers shorter codes than uncommon ones.

JPEG produces a 20:1 or even better compression ratio. Decoding a JPEG image requires running the algorithm backward, and thus it is roughly symmetric: decoding takes as long as encoding.

Encoder and Decoder Diagrams of H.261

H.261 is an algorithm that determines how to encode and compress video data electronically. It is a video coding standard published by the ITU (International Telecommunication Union) in 1990.

* It is the most widely used international compression technique for encoding video.
* The H.261 encoding technique can encode only the video part of an audiovisual service.
* H.261 is designed for two-way communication over ISDN lines (video conferencing and video calling) and supports data rates in multiples of 64 kbps.
* H.261 defines a video encoder intended to compress video data that will be sent over Integrated Services Digital Network (ISDN) lines. The H.261 codec is intended primarily for use in video telephony and videoconferencing applications.
* H.261 was the first practical digital video coding standard. Its design was a pioneering effort, and all subsequent international video coding standards (MPEG-1, MPEG-2/H.262, H.263 and even H.264) have been based closely on it. Additionally, the methods used by the H.261 development committee to collaboratively develop the standard have remained the basic operating process for subsequent standardization work in the field.
* The images supplied as input to an H.261 compressor must meet both color space and size (width and height) requirements.
* In terms of color space, the images must be YCbCr images.
* In terms of size, the images must adhere to either the Common Interchange Format (CIF) or the Quarter-CIF (QCIF) format. The table below indicates the widths and heights defined by these formats:

              Width   Height
CIF images    352     288
QCIF images   176     144

How the H.261 Codec Works

The three main elements in an H.261 encoder are:
* Prediction
* Block Transformation
* Quantization and Entropy Coding

Two types of image frames are defined: intra-frames (I-frames) and inter-frames (P-frames). I-frames are treated as independent images; a transform coding method similar to JPEG is applied within each I-frame, hence "intra".

Intra-frame (I-frame) Coding

[Figure: I-frame coding - each I-frame is independently transform coded into a bit stream.]

* Macroblocks are of size 16 x 16 pixels for the Y frame, and 8 x 8 for the Cb and Cr frames, since 4:2:0 chroma subsampling is employed.
* A macroblock consists of four Y, one Cb and one Cr 8 x 8 blocks.
* For each 8 x 8 block a DCT transform is applied; the DCT coefficients then go through quantization, zigzag scan and entropy coding.
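The zigzag scan mentioned in Step 5, and used again for the I-frame DCT coefficients above, can be sketched as follows. Generating the path by walking anti-diagonals is one common way to produce the JPEG-style order; the function names are our own:

```python
def zigzag_order(n=8):
    """Return the (row, col) visiting order of an n x n zigzag scan."""
    order = []
    for s in range(2 * n - 1):              # each anti-diagonal has i + j == s
        diag = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        if s % 2 == 0:
            diag.reverse()                  # even diagonals run upward
        order.extend(diag)
    return order

def linearize(block):
    """Flatten a block along the zigzag path so trailing zeros cluster."""
    return [block[i][j] for i, j in zigzag_order(len(block))]

block = [[0] * 8 for _ in range(8)]
block[0][0], block[0][1], block[1][0] = 50, -3, 2    # low frequencies only
print(linearize(block)[:6], "...")                   # -> [50, -3, 2, 0, 0, 0] ...
```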
Inter-frame (P-frame) Predictive Coding

* The P-frame coding scheme is based on motion compensation.
* In motion compensation, a search area is constructed in the previous frame to determine the best-matching reference macroblock.
* After the prediction, a difference macroblock is derived to measure the prediction error. Each of its 8 x 8 blocks goes through DCT, quantization, zigzag scan and entropy coding.
* The P-frame coding encodes the difference macroblock, not the target macroblock itself.
* Sometimes a good match cannot be found, i.e., the prediction error exceeds a certain acceptable level; the macroblock itself is then encoded as an intra macroblock.

[Figure: H.261 P-frame coding based on motion compensation - the best match for the target macroblock is searched for in the reference frame, and the difference block is coded together with the motion vector.]

[Figure: H.261 encoder (a) and decoder (b) block diagrams - the encoder loop contains DCT, quantization, output buffer, inverse quantization, frame memory and motion estimation; the decoder mirrors this loop using the received motion vectors.]

MPEG Standards

The MPEG standards are an evolving set of standards for video and audio compression and for multimedia delivery developed by the Moving Picture Experts Group (MPEG).

MPEG-1 was designed for coding progressive video at a transmission rate of about 1.5 million bits per second. It was designed specifically for the Video-CD and CD-i media. MPEG-1 audio layer-3 (MP3) has also evolved from early MPEG work.

MPEG-2 was designed for coding interlaced images at transmission rates above 4 million bits per second. MPEG-2 is used for digital TV broadcast and DVD. An MPEG-2 player can handle MPEG-1 data as well.

MPEG-1 and MPEG-2 define techniques for compressing digital video by factors varying from 25:1 to 50:1. The compression is achieved using five different techniques:
1. The use of a frequency-based transform called the Discrete Cosine Transform (DCT).
2. Quantization, a technique for losing selective information (sometimes known as lossy compression) that can be acceptably lost from visual information.
3. Huffman coding, a technique of lossless compression that uses code tables based on statistics about the encoded data.
4. Motion-compensated predictive coding, in which the differences between an image and its preceding image are calculated and only the differences are encoded.
5. Bi-directional prediction, in which some images are predicted from the pictures immediately preceding and following them.

The first three techniques are also used in JPEG file compression. A sketch of the motion-compensated prediction in technique 4 follows below.

A proposed MPEG-3 standard, intended for High Definition TV (HDTV), was merged with the MPEG-2 standard when it became apparent that MPEG-2 met the HDTV requirements.

MPEG-4 is a much more ambitious standard and addresses speech and video synthesis, fractal geometry, computer visualization, and an artificial intelligence (AI) approach to reconstructing images. MPEG-4 also addresses a standard way for authors to create and define the media objects in a multimedia presentation.

MPEG-21 provides a larger, architectural framework for the creation and delivery of multimedia. It defines seven key elements:
* Digital item declaration
* Digital item identification and description
* Content handling and usage
* Intellectual property management and protection
* Terminals and networks
* Content representation
* Event reporting
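The motion-compensated prediction used by H.261 and MPEG can be illustrated with a toy block-matching search in Python. The exhaustive search window and the sum-of-absolute-differences (SAD) criterion are common textbook choices, not anything mandated by the standards, and the function names are our own:

```python
import numpy as np

def best_match(ref, target_block, top, left, search=4):
    """Exhaustively search a +/- `search` pixel window in the reference
    frame for the block minimizing the sum of absolute differences."""
    n = target_block.shape[0]
    best = (None, float("inf"))
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if 0 <= y <= ref.shape[0] - n and 0 <= x <= ref.shape[1] - n:
                candidate = ref[y:y + n, x:x + n]
                sad = np.abs(candidate.astype(int) - target_block.astype(int)).sum()
                if sad < best[1]:
                    best = ((dy, dx), sad)
    return best   # (motion vector, prediction error)

ref = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
target = ref[10:26, 12:28]                 # a 16 x 16 block taken at (10, 12)
print(best_match(ref, target, 8, 8))       # -> ((2, 4), 0): perfect match found
```

Only the motion vector and the (DCT-coded) difference block would then be transmitted, which is exactly why P-frames are so much cheaper than I-frames.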
MPEG Video Compression Standard

The name MPEG is an acronym for Moving Pictures Experts Group. MPEG is a method for video compression which involves the compression of digital images and sound, as well as the synchronization of the two. There are currently several MPEG standards:

* MPEG-1 is intended for intermediate data rates, on the order of 1.5 Mbit/sec.
* MPEG-2 is intended for high data rates of at least 10 Mbit/sec.
* MPEG-3 was intended for HDTV compression but was found to be redundant and was merged with MPEG-2.
* MPEG-4 is intended for very low data rates of less than 64 kbit/sec.

In principle, a motion picture is a rapid flow of a set of frames, where each frame is an image. In other words, a frame is a spatial combination of pixels, and a video is a temporal combination of frames that are sent one after another. Compressing video, then, means spatially compressing each frame and temporally compressing a set of frames.

Spatial Compression: The spatial compression of each frame is done with JPEG (or a modification of it). Each frame is a picture that can be independently compressed.

Temporal Compression: In temporal compression, redundant frames are removed. To temporally compress data, the MPEG method first divides frames into three categories: I-frames, P-frames and B-frames.

[Figure 1: A sample sequence of MPEG frames. Figure 2: How I-, P- and B-frames are constructed from their reference frames.]

I-frames: An intracoded frame (I-frame) is an independent frame that is not related to any other frame. I-frames are present at regular intervals. An I-frame must appear periodically to handle sudden changes in the picture that the previous and following frames cannot show. Also, when a video is broadcast, a viewer may tune in at any time; if there were only one I-frame, at the beginning of the broadcast, a viewer who tunes in late would not receive a complete picture. I-frames are independent of other frames and cannot be constructed from other frames.

P-frames: A predicted frame (P-frame) is related to the preceding I-frame or P-frame. In other words, each P-frame contains only the changes from the preceding frame. The changes, however, cannot cover a big segment; for example, for a fast-moving object, the new changes may not be recorded in a P-frame. P-frames can be constructed only from previous I- or P-frames. P-frames carry much less information than other frame types and carry even fewer bits after compression.

B-frames: A bidirectional frame (B-frame) is related to the preceding and following I-frame or P-frame. In other words, each B-frame is relative to the past and the future. Note that a B-frame is never related to another B-frame. (A toy sketch of these frame dependencies follows at the end of this section.)

According to the MPEG standard, the entire movie is considered as a video sequence consisting of pictures, each having three components: one luminance component and two chrominance components (Y, U and V). The luminance component contains the gray scale picture, and the chrominance components provide the color, saturation and hue. Each component is a rectangular array of samples, and each row of the array is called a raster line. The eye is more sensitive to spatial variations of luminance but less sensitive to similar variations in chrominance; hence the MPEG-1 standard samples the chrominance components at half the resolution of the luminance component.

The input to an MPEG encoder is called the source data, and the output of the MPEG decoder is called the reconstructed data. The MPEG decoder has three parts: the audio layer, the video layer and the system layer. The system layer reads and interprets the various headers in the source data and transmits the data to either the audio or the video layer.
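To illustrate the frame-dependency rules just described, here is a toy Python sketch in which a "frame" is a small integer array and "prediction" is plain differencing; this is a deliberate simplification of real motion-compensated prediction:

```python
import numpy as np

def encode_p(frame, reference):
    """A toy P-'frame': just the element-wise change from its reference."""
    return frame - reference

def decode_p(diff, reference):
    return reference + diff

def decode_b(diff, prev_ref, next_ref):
    """A toy B-'frame': predicted from the average of past and future."""
    return diff + (prev_ref + next_ref) // 2

f0 = np.array([[10, 10], [10, 10]])    # I-frame: sent as-is
f2 = np.array([[12, 10], [10, 14]])    # P-frame: only the diff vs f0 is sent
f1 = np.array([[11, 10], [10, 12]])    # B-frame: diff vs average of f0 and f2

p_diff = encode_p(f2, f0)
b_diff = f1 - (f0 + f2) // 2
assert (decode_p(p_diff, f0) == f2).all()
assert (decode_b(b_diff, f0, f2) == f1).all()
```

Note that decoding f1 (the B-frame) requires f2 to have arrived first, which is why MPEG streams transmit frames out of display order.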
The basic building block of an MPEG picture is the macroblock:

[Figure: An MPEG macroblock - a 16 x 16 block of luminance samples plus the corresponding subsampled chrominance blocks.]

The macroblock consists of a 16 x 16 block of luminance gray scale samples, divided into four 8 x 8 blocks, together with the corresponding 8 x 8 blocks of chrominance samples. The MPEG compression of a macroblock consists of passing each of the six blocks through DCT, quantization and entropy encoding, similar to JPEG.

A picture in MPEG is made up of slices, where each slice is a continuous set of macroblocks having a similar gray scale component. The concept of a slice is important when a picture contains uniform areas.

The MPEG standard defines a quantizer_scale parameter taking values in the range (1, 31). Quantization for intra coding is:

Q_DCT = (16 x DCT + sign(DCT) x quantizer_scale x Q) / (2 x quantizer_scale x Q)

where
DCT = the discrete cosine transform coefficient being encoded,
Q = the quantization coefficient from the quantization table,
sign(DCT) = +1 if DCT > 0, 0 if DCT = 0, and -1 if DCT < 0.

The quantization rule for non-intra coding is:

Q_DCT = (16 x DCT) / (2 x quantizer_scale x Q)

The quantized numbers Q_DCT are encoded using a non-adaptive Huffman method, and the standard defines specific Huffman code tables, which were calculated by collecting statistics.

DVI Technology

DVI is a technology that includes coding algorithms. Its fundamental components are a VLSI chip set for the video subsystem, a well-specified data format for audio and video files, an application user interface to the audio-visual kernel, and compression as well as decompression algorithms. For encoding audio, a standard signal processor is used; processing of images and video is performed by a video processor.

Audio and Still Image Encoding

Audio signals are digitized using 16 bits per sample. Audio signals may be PCM-encoded or compressed using adaptive differential pulse code modulation (ADPCM). Supported sampling frequencies are 11025 Hz, 22050 Hz and 44100 Hz for one or two PCM-coded channels, and 8268 Hz, 31129 Hz and 33075 Hz for ADPCM.

For still images, DVI assumes an internal digital YUV format for image preparation; any video input signal must first be transformed into this format. The color of each pixel is split into a luminance component (Y) and two chrominance components (U and V); the luminance represents the gray scale image. From RGB, DVI computes the YUV signal using the following relationships:

Y = 0.30R + 0.59G + 0.11B
U = B - Y
V = R - Y

This leads to:

U = -0.30R - 0.59G + 0.89B
V = 0.70R - 0.59G - 0.11B

DVI determines the YUV components according to the following (a small sketch of this conversion appears at the end of this section):

Y = 0.299R + 0.587G + 0.114B + 16
U = 0.577B - 0.577Y + 137.23
V = 0.730R - 0.730Y + 139.67

DVI is able to process images in the 16-bit YUV format and the 24-bit YUV format. The 24-bit YUV format uses 8 bits for each component. The 16-bit YUV format codes the Y component of each pixel with 6 bits and the color difference components with 5 bits each.

There are two bitmap formats, planar and packed:
* Planar: All data of the Y component are stored first, followed by all U component values and then all V values.
* Packed: The Y, U and V information of each pixel is stored together, followed by the data of the next pixel.
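A minimal sketch of the DVI-style RGB-to-YUV conversion given above, applied per pixel; the clipping to the 8-bit range and the function name are our own additions:

```python
def rgb_to_dvi_yuv(r, g, b):
    """Convert one RGB pixel (0..255 per channel) using the DVI formulas."""
    y = 0.299 * r + 0.587 * g + 0.114 * b + 16
    u = 0.577 * b - 0.577 * y + 137.23
    v = 0.730 * r - 0.730 * y + 139.67
    clip = lambda x: max(0, min(255, round(x)))   # keep results in 8 bits
    return clip(y), clip(u), clip(v)

print(rgb_to_dvi_yuv(255, 0, 0))      # pure red: strong V, weak U
print(rgb_to_dvi_yuv(128, 128, 128))  # mid gray: U and V near their offsets
```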
Optical Storage

Optical storage is also known as "optical media", "optical memory" or "optical medium"; all read and write activities are performed by a laser beam, and all recorded information is stored on an optical disc. Optical media pack a large amount of data into a compact space, and their big advantages are low cost, light weight and easy transport, since the disc is a removable medium, unlike a hard drive.

How Optical Storage Devices Work
* In optical storage devices, all data is saved as patterns of dots that are read using light; a laser beam serves as the light source.
* Data is read by bouncing the laser beam off the surface of the storage medium.
* To write data, the laser beam is used in a high-power mode to mark the surface of the medium with dots. This process is also called "burning" data onto a disc.

Advantages
* Capable of storing large amounts of data.
* Affordable price.
* Can be recycled (re-used).
* High data stability.
* Available in many storage formats and capacities.
* Good durability, transportability and archiving properties.

Disadvantages
* Some older PCs are not able to read these discs.
* Recycling them can be troublesome.

WORM Technology

WORM (Write Once, Read Many) storage emerged in the late 1980s and was popular with large institutions for the archiving of high-volume, sensitive data. When data is written to a WORM drive, physical marks are made on the media surface by a low-powered laser, and since these marks are permanent, they cannot be erased.

Rewritable, or erasable, optical disk drives followed, providing the same high capacities as those provided by WORM or CD-ROM devices. However, despite the significant improvements made by recent optical technologies, performance continued to lag behind that of hard disk devices. On the plus side, optical drives offered several advantages: their storage medium is rugged, easily transportable, and immune from head crashes and the kind of data loss caused by adverse environmental factors.

[Figure: WORM write cycle and read cycle performed by a low-powered laser.]
The system management must employ adequate scheduling algorithms to Page No 98 SS w Multimedia Systems serve the requirements of the applications. Thereby, the resource is first allocated and then managed Resource Management in distributed multimedia systems covers several computers and the involved cornmunication networks. It allocates all resources involved in the data transfer process between sources and sinks. Resources A resource is a system entity required by tasks for manipulating data. Each resource has a set of distinguishing characteristics classified using the following scheme: « Aresource can be active or passive. « An active resource is the CPU or a network adapter for protocol processing; it provides a service. * A passive resource is the main memory, cornmunication bandwidth or a file system .it denotes some system capability required by active resources. « A resource can be either used exclusively by one process at a time or shared between various processes. Active resources are often exclusive passive resources can usually be shared among processes. * Aresource that exists only once in the system is known as @ single, otherwise it is a multiple resource. In a transporter-based multiprocessor system, the individual CPU is a multiple resource. Fach resource has a capacity which results from the ability of 4 certain task to perform using the resource in a given time- Span, Page No 99 N w Multimedia Systems Requirements The requirements of multimedia applications and data streams must be served by the single components of , multimedia system. The resource management maps these requirements onto the respective capacity. The transmission and processing requirements of local and distributeg multimedia applications can be specified according to the following characteristics: The throughput is determined by the needed data rate of 3 connection to satisfy the application requirements. It also depends on the size of the data units. We distinguish between local and global (end-to-end) delay: 2) The delay “at the resource” is the maximum time span for the completion of a certain task at this resource. b) The end-to-end delay is the total delay for a data unit to be transmitted from the source to its destination. For example, the source of a video telephone is the camera, the destination is the video window on the screen of the partner. The jitter (or delay jitter) determines the maximum allowed variance in the arrival of data at the destination. The reliability defines error detection and correction mechanisms used for the transmission and processing of multimedia tasks. Errors can be ignored, indicated and/or corrected, Page No 100 - Multimedia Systems In accordance with communication systems, these requirements are also known as Quality of Service parameters (QoS). Components and Phases one possible realization of resource allocation and management is based on the interaction between clients and their respective resource managers. The client selects the resource and requests a resource allocation by specifying its requirements through a QoS specification. This is equivalent to a workload request. First, the resource manager checks its own resource utilization and decides if the reservation request can be served or not. All existing reservations are stored. This way, their share in terms of the respective resource capacity is guaranteed. Amore elaborate method is to optimize single parameters. 
In this case, two parameters are determined by the application, and the resource manager calculates the best achievable value for the third parameter (e.g., delay). Server Station User Station [Frame Grabber & | aon | Ease BiWete pees — =e - 4s Compson decorrence rr a Commencates Commune oon Trap & Newan Lyer Newatt Laver =a i Daa tisk — Dea Link * ® Components grouped for the purpose of video data transmission p 8 Page No 12° Multimedia Systems In the case shown in Figure, two computers are connectag aver @ LAN. The transmission of video data between a camer, connected so a computer server and the screen of tp, computer user involves, for all depicted components, a resource menaqger Phases of the Resource Reservation and Management Process A resource manager provides components for the different phases of the allocation and management process: 1. Schedulability Test The resource manager checks with the given QoS parameters (e.g., throvghput and reliability) to determine if there is enough remaining resource capacity available to handles this additional request. 2. Quality of Service Calculation After the scredulability test, the resource manager calculates the best possible performance (e.g., delay) the resource can guarantee for the new request. Resource Reservation The resource manager allocates the required capacity to meet the QoS guarantees for each request. 4. Resource Scheduling Incoming messages from connections are according to the given QoS guarantees. For management, for instance, the allocation of the resource !5 done by the scheduler at the moment the data arrive for scheduled process processing. Page No 102 Multimedia Systems with respect to the last phase, for each resource a scheduling algorithm is defined. The schedulability test, QoS calculation and resource reservation depend on this algorithm used by the scheduler. Allocation Scheme Reservation of resources can be made either in a pessimistic or optimistic way: The pessimistic approach avoids resource conflicts by making reservations for the worst case, i.e., resource bandwidth for the longest processing time and the highest rate which might ever be needed by a task is reserved. Resource conflicts are therefore avoided. This leads potentially to an underutilization of resources. In a multimedia system, the remaining processor time can be used by discrete media tasks. This method results in a guaranteed QoS. With the optimistic approach, resources are reserved according to an average workload only. This means that the CPU is only reserved for the average processing time. This approach may overbook resources with the possibility of unpredictable packet delays. QoS parameters are met as far as possible. Resources are highly utilized, though an overload situation may result in failure. To detect an overload situation and to handle it accordingly a Monitor can be implemented. The monitor may, for instance, Pre-empt processes according to their importance. Pawe No ic? Multimedia Systems Continuous media resource model of real time scheduling Continuous Media Resource Model + This specifies a model frequently adopted to define Qos parameters and hence, the characteristics of the data stream « It is based on the model of Linear Bounded Arrival Processes(I BAP), * In this model a distributedsystem is decomposed into a chain of resources traversed by the messages on their end-to-end path. Examples: CPU, networks. * The data stream consists of LDUs. In this context, i.e. 
call them messages * Various data streams are independent of each other. s, This variance of the data rate results in an accumulation of messages (burst), where the maximal range is defined by the maximum allowed number of messages. In the LBAP model, a burst of messages consists of messages that arrived ahead of schedule. IBAP is a message arrival process at a resource defined by three parameters: ™M = Maxirnmum message size (byte/message). R= Maximum message rate (message/second). B= Maximum Burstiness (message). Page No 104 ee __ Multimedia Systems Real Time Scheduling System Model Real-time Scheduling: System Model all scheduling algorithms to be introduced are based on the following system model for the scheduling of real-time tasks. Their essential components are the resources tasks and scheduling goals. A task is a schedulable entity of the system, and it corresponds to the notion of a thread in the previous description. In a hard real-time system, a task is characterized by its timing constraints, as well as by its resource requirements. In the considered case, only periodic tasks without precedence constraints are discussed, i.e., the processing of two tasks is mutually independent. For multimedia systems, this can be assumed without any major restriction. Synchronized data, for example, can be processed by a single process. The time constraints of the periodic task 7 are characterized by the following parameters (s, e, d,, P) The time constraints of the 05: Starting point 0 e: Processing time of 7 od: Deadline of 7 p:Period of? Or: Rate of T(r = 1/p,) where by 0d” ed" dd" p). Page No 105 Multimedia Systems <1 ‘ Fig.: Characterization of periodic tasks. The starting point s is the first time when the periodic task requires processing. Afterwards, it requires Processing in every period with a processing time of e. Ats + (k — 1) *p, the task Tis ready for k-processing. The processing of 7 in period k must be finished at s + (k — 1) *p+d. For continuous media tasks, it is assumed that the deadline of the period (k - 1) is the ready time of period k. This is known as congestion avoiding deadlines: The deadline for each message (d) coincides with the period of the respective periodic task (p). Tasks can be pre-emptive or non-pre-emptive. A pre-emptive task can be interrupted by the request of any task with a higher priority. Processing is continu: d in the same state later on. A non-pre-emptive task cannot be interrupted until it voluntarily yields the processor. Any high- priority task Page No 106 Multimedia Systerne must wait until the low-priority task is finished. The high priority task Is then subject to priority inversion In the following, all tasks processed on the CPU are considered as preemptive unless otherwise stated in a real-time system, the scheduling algoritnen must determine a schedule for an exclusive, limited resource that s used by different processes concurrently such that all of thern can be processed without violating any deadlines. This notion can be extended to a model with multiple resources (€.g., CPU) of the sane type. A major performance metric for a real-time scnedu algorithm is the guarantee ratio. The guarantee ratio is the total number of guaranteed tasks versus the number of tasks which could be processed. Another performance metric is the processor utilization. This is the amount of processing time used by guaranteed tasks versus the total amount of processing time. 
Earliest Deadline First (EDF) Algorithm

Earliest Deadline First (EDF) is an optimal dynamic-priority scheduling algorithm mainly used in real-time operating systems. It can be described through the following points:

a) Priority driven: Each process is assigned a priority, and the scheduler runs processes accordingly, so the process with the highest priority is carried out first. In the case of EDF, the priority is set according to the absolute deadline of each process.
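A minimal sketch of an EDF dispatcher over a set of jobs with absolute deadlines; it runs each job to completion, ignoring pre-emption and periodic re-release, which is a simplification of the pre-emptive EDF described in the text:

```python
import heapq

def edf_schedule(jobs):
    """Dispatch ready jobs in order of earliest absolute deadline.
    Each job is (release_time, deadline, processing_time, name)."""
    time, ready, done = 0, [], []
    pending = sorted(jobs)                        # by release time
    while pending or ready:
        while pending and pending[0][0] <= time:  # collect released jobs
            r, d, e, name = pending.pop(0)
            heapq.heappush(ready, (d, e, name))   # priority = absolute deadline
        if not ready:
            time = pending[0][0]                  # idle until the next release
            continue
        d, e, name = heapq.heappop(ready)
        time += e                                 # run the job to completion
        done.append((name, time, "OK" if time <= d else "MISSED"))
    return done

jobs = [(0, 10, 4, "A"), (0, 6, 3, "B"), (2, 14, 5, "C")]
print(edf_schedule(jobs))   # B (earliest deadline) runs first, then A, then C
```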
