WO2004045217A1 - Transmission system with colour depth scalability - Google Patents

Transmission system with colour depth scalability

Info

Publication number
WO2004045217A1
Authority
WO
WIPO (PCT)
Prior art keywords
enhancement
basic
bits
component video
video signal
Prior art date
Application number
PCT/IB2003/004995
Other languages
French (fr)
Inventor
Paul Bazzaz
Jean-Pierre Weber
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V.
Priority to AU2003278445A
Publication of WO2004045217A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/266Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
    • H04N21/2662Controlling the complexity of the video stream, e.g. by scaling the resolution or bitrate of the video stream based on the client capabilities
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/184Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being bits, e.g. of the compressed video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/30Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
    • H04N19/34Scalability techniques involving progressive bit-plane based encoding of the enhancement layer, e.g. fine granular scalability [FGS]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234327Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by decomposing into layers, e.g. base layer and one or more enhancement layers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25808Management of client data
    • H04N21/25825Management of client data involving client display capabilities, e.g. screen resolution of a mobile phone

Abstract

The object of the invention is to provide colour depth scalability. It consists in splitting the original multi-component video signal (YUV) into several partial multi-component video signals: at least a basic multi-component video signal and an enhancement multi-component video signal that are complementary to each other. Depending on a target display colour depth, the application will transmit: - only a basic encoded stream generated from the basic multi-component video signal, - or both the basic encoded stream and an enhancement encoded stream generated from the enhancement multi-component video signal. At the reception, the received streams are decoded and the recovered multi-component video signals are combined for display.

Description

Transmission system with colour depth scalability
FIELD OF THE INVENTION
The invention relates, in general, to a system and a method for video encoding, transmission and display, with video scalability. It also relates to a storage unit storing a video content.
BACKGROUND OF THE INVENTION
US patent n° 6,263,022 assigned to Philips Electronics North America Corporation describes a video encoder allowing scalable video coding. Temporal, spatial and quality scalability are mentioned. Scalable video coding is presented in this patent as a desirable feature for applications and services employing decoders with a wide range of processing power. Scalability allows processors with low computational power to decode only a subset of the scalable video stream. Video scalability is also mentioned as a useful feature in environments with a variable transmission bandwidth. In these environments, receivers with low access bandwidth receive only a subset of the scalable video stream, where the amount of this subset is proportional to the available bandwidth. The invention proposes another type of scalability.
SUMMARY OF THE INVENTION
A transmission method according to the invention comprises the steps of: - getting a basic encoded stream and optionally, depending on a target display colour depth, one or more enhancement encoded streams, said basic and enhancement encoded streams being derived from an original multi-component video signal in which each component corresponds to a sequence of pixel information quantified on a certain number of bits including a group of basic bits and one or more groups of enhancement bits, said basic encoded stream resulting from an encoding of a basic multi-component video signal comprising said basic bits, and said enhancement streams resulting from the encoding of one or more enhancement multi-component video signals comprising said enhancement bits,
- transmitting said encoded streams.
The colour depth of a display is defined as the number of bits used by the display unit to restore a pixel. Currently, the pixel information contained in the multi-component video signals is quantified on 24 bits (8 bits for each of the three components Y, U and V). This corresponds to the colour depth of the standard television display unit. However, other types of display units (like wireless mobile display units) may have smaller colour depths. For instance, display units may have a 12-bit colour depth (4 bits for each of the three components Y, U and V) or a 16-bit colour depth (6 bits for the luminance Y, and 5 bits for each of the chrominance components U and V). For such display units, an important part of the received information is not used. Transmitting information that is not used by the receiver is a waste of the transmission capacity. This is especially critical in wireless environments.
The transmission method of the invention allows achieving colour depth scalability. According to the invention, the transmitted video is adapted to a target display colour depth. If the display unit of the receiver has a small colour depth, only the basic encoded stream is transmitted. For receivers whose display unit has a higher colour depth, one or more enhancement encoded streams are transmitted with the basic encoded stream. The target display colour depth is set by the video transmission application. When the video is transmitted on request of the receiver, the target display colour depth is advantageously transmitted by the receiver in the request.
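As an illustration of this selection step, the following sketch (in Python, with hypothetical names such as select_streams; the patent does not prescribe any particular implementation) shows how a transmission application could decide which encoded streams to send for a given target display colour depth.

```python
def select_streams(target_colour_depth, basic_depth, enhancement_depths):
    """Return the labels of the encoded streams to transmit.

    target_colour_depth: bits per pixel the target display can restore.
    basic_depth:         bits per pixel carried by the basic encoded stream.
    enhancement_depths:  extra bits per pixel added by each enhancement
                         encoded stream, in order.
    """
    streams = ["7_B"]                  # the basic encoded stream is always sent
    depth = basic_depth
    for i, extra in enumerate(enhancement_depths, start=1):
        if depth >= target_colour_depth:
            break                      # the display cannot use more bits
        streams.append(f"7_E{i}")      # add the next enhancement encoded stream
        depth += extra
    return streams

# 12-bit portable display: only the basic stream is needed.
print(select_streams(12, basic_depth=12, enhancement_depths=[12]))   # ['7_B']
# 24-bit television display: basic plus enhancement stream.
print(select_streams(24, basic_depth=12, enhancement_depths=[12]))   # ['7_B', '7_E1']
```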
Colour depth scalability is also achieved with a transmission system as defined in claims 1 and 2, a video encoding device as defined in claim 4, a video generation method as defined in claim 5, a video recovery method as defined in claim 8, a terminal as defined in claim 7, and a storage unit as defined in claim 9.
BRIEF DESCRIPTION OF THE DRAWINGS
These and other aspects of the invention will be further described with reference to the following drawings, in which:
- figure 1 is a block diagram of an example of a transmission system according to the invention,
- figure 2 is a block diagram of a video encoding device according to the invention,
- figure 3 is a block diagram of a terminal according to the invention,
- figure 4 is a diagram showing the steps of a video generation method, a transmission method and a video recovery method according to the invention.
DESCRIPTION OF A PREFERRED EMBODIMENT
As represented in Figure 1, a video transmission system 1 according to the invention comprises a transmitter 2, a receiving terminal 3 having a display, and a transmission channel 5. The transmitter 2 comprises a transmission/reception block 6a and hosts a video transmission application 6b.
For instance, the video transmission application 6b may be a video on demand service intended to transmit a video requested by a user. It may also be a conference broadcast service intended to transmit video to people who are attending a conference. It may also be an application run by a home server intended to transmit streams towards terminals or displays in the house (the transmitted streams may be recovered by the home server from the Internet or can be read from a storage unit, for instance, a disc). The video transmission application 6b has access to a content 7 generated by a video encoding device 8. The content 7 can be generated on the fly or it can be stored in a storage unit 9 and recovered from the storage unit 9. The video encoding device 8 and the storage unit 9 may be included in the transmitter 2 but this is not mandatory. The video encoding device 8 and the storage unit 9 may be located remote from the transmitter 2. For instance, the transmitter 2 may access the content 7 via the Internet network.
The content 7 comprises at least a basic encoded stream 7_B and one or more enhancement encoded streams 7_Ei (i=1 to N). Said basic and enhancement encoded streams are derived from an original multi-component video signal in which each component corresponds to a sequence of pixel information quantified on a certain number of bits including a group of basic bits and one or more groups of enhancement bits. The basic and enhancement encoded streams are obtained in the following manner:
- the basic encoded stream 7_B results from the encoding of a basic multi-component video signal that comprises the basic bits,
- the enhancement streams 7_Ei result from the encoding of N enhancement multi-component video signals (where N is an integer and N > 1), each enhancement multi-component video signal comprising a group of enhancement bits.
In the following description, the basic bits are the most significant bits of the pixel information and the enhancement bits are the least significant bits of the pixel information. However, this is not mandatory. Other ways of splitting the original multi-component video signal may be used.
For instance, if the pixel information of the original multi-component video signal is quantified on 24 bits, it can be divided:
- into a basic multi-component video signal comprising the 12 most significant bits of the pixel information (the 4 most significant bits of each component), and an enhancement multi-component video signal comprising the 12 least significant bits of the pixel information (the 4 least significant bits of each component),
- or into a basic multi-component video signal comprising the 16 most significant bits of the pixel information (the 6 most significant bits of the luminance component and the 5 most significant bits of the two chrominance components), and an enhancement multi-component video signal comprising the remaining 8 least significant bits of the pixel information (the 2 least significant bits of the luminance component and the 3 least significant bits of the two chrominance components),
- or into a basic multi-component video signal comprising the 8 most significant bits of the pixel information (the 4 most significant bits of the luminance component and the 2 most significant bits of the two chrominance components), a first enhancement multi-component video signal comprising the next 8 most significant bits of the pixel information (the 4 next significant bits of the luminance component and the 2 next significant bits of the two chrominance components), and a second enhancement multi-component video signal comprising the remaining 8 least significant bits of the pixel information (the 4 least significant bits of the luminance component and the 2 least significant bits of the two chrominance components).
Of course, these are only examples and other types of subdivision could be used. According to the invention, the video transmission application 6b is designed so as to get the basic encoded stream 7_B and optionally, depending on a target display colour depth TCD, one or more enhancement encoded streams 7_Ei. The target display colour depth is set by the video transmission application 6b. For instance, when the application 6b is a conference broadcast service, the target display units are portable display units. Thus, the target colour depth TCD is set to such a level that only the basic encoded stream is transmitted. When the application 6b is a video on demand service, the target colour depth TCD is sent to the application in the user request R.
Figure 2 is a block diagram of an example of an encoding device according to the invention. The encoding device of Figure 2 generates only one enhancement stream. This is not restrictive.
The encoding device of Figure 2 comprises a pre-processing block 12 for pre-processing the images IM produced by a video acquisition system (not represented here). This pre-processing block 12 consists mainly of an RGB to YUV conversion.
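For illustration, here is a minimal sketch of such a conversion using the widely used BT.601/JFIF coefficients; this is one common choice, and the patent does not specify which conversion the pre-processing block 12 actually uses.

```python
def rgb_to_yuv(r, g, b):
    """Convert one 8-bit RGB pixel to YUV using BT.601/JFIF-style coefficients."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.500 * b + 128.0   # chrominance, offset to stay positive
    v = 0.500 * r - 0.419 * g - 0.081 * b + 128.0
    clamp = lambda x: max(0, min(255, int(round(x))))
    return clamp(y), clamp(u), clamp(v)

print(rgb_to_yuv(255, 0, 0))   # a saturated red pixel
```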
The three-component video signal YUV is then applied to a colour depth splitting block 14. The colour depth splitting block 14 outputs a basic three-component video signal YUV_B and an enhancement three-component video signal YUV_E1 applied to a first and a second encoding block 16 and 18, respectively. The first and the second encoding block 16 and 18 output a basic encoded stream 7_B and an enhancement encoded stream 7_E1, respectively.
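To make the behaviour of the colour depth splitting block 14 concrete, here is a minimal Python sketch; the function name split_pixel and the default bit allocation are illustrative assumptions, not part of the patent. For each component it keeps a configurable number of most significant bits for the basic signal and routes the remaining least significant bits to the enhancement signal.

```python
def split_pixel(y, u, v, basic_bits=(4, 4, 4)):
    """Split one 24-bit YUV pixel into a basic part and an enhancement part.

    Each component is assumed to be quantified on 8 bits; basic_bits gives,
    per component, how many most significant bits go into the basic signal.
    The remaining least significant bits form the enhancement signal.
    """
    basic, enhancement = [], []
    for value, nb_basic in zip((y, u, v), basic_bits):
        nb_enh = 8 - nb_basic
        basic.append(value >> nb_enh)                     # most significant bits
        enhancement.append(value & ((1 << nb_enh) - 1))   # least significant bits
    return tuple(basic), tuple(enhancement)

# 12-bit basic layer (4+4+4 bits) out of a 24-bit pixel:
print(split_pixel(0xB7, 0x5A, 0x3C))                      # ((11, 5, 3), (7, 10, 12))
# 16-bit basic layer (6 bits for Y, 5 bits for U and V):
print(split_pixel(0xB7, 0x5A, 0x3C, basic_bits=(6, 5, 5)))
```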
It is also possible to use a single encoding block and to process the basic and the enhancement three-component video signals successively. The pixel information carried in each component Ci (with C1=Y, C2=U and C3=V) is quantified on Qi bits. The colour depth splitting block 14 selects the Qi-Pi most significant bits of the pixel information for each component Ci so as to generate the basic three-component video signal. It selects the Pi least significant bits of the pixel information for each component so as to generate the enhancement three-component video signal. Advantageously, the encoders are MPEG encoders. The basic and the enhancement encoded streams are linked at the image level (the VOP in MPEG terminology), that is, the colour information bits of the k-th pixel in the j-th image of the basic encoded stream 7_B are the complement of the colour information bits of the same k-th pixel of the same j-th image of the enhancement encoded stream 7_E1.
Figure 3 is a schematic diagram of a terminal according to the invention. Here again, the represented terminal is designed to receive only one enhancement stream. However, this is not restrictive.
The terminal of Figure 3 comprises means 20 for recovering a basic encoded stream 7_B' and an enhancement encoded stream 7_E1'. For instance, the means 20 may be reception means for receiving the basic and the enhancement streams 7_B' and 7_E1' via the transmission channel 5. The means 20 may also be reading means for reading the basic and the enhancement streams 7_B' and 7_E1' in a storage unit (for instance, an optical disc). The recovered basic and enhancement streams 7_B' and 7_E1' are applied to a first and a second decoder 22 and 24, respectively. The decoder 22 outputs a basic recovered three-component video signal YUV_B' and the decoder 24 outputs an enhancement three-component video signal YUV_E1'. The basic and the enhancement recovered three-component video signals are applied to a merging block 26. The merging block 26 outputs a combined recovered three-component video signal YUV_C. The combined recovered three-component video signal YUV_C is applied to a post-processing block 28. The output of the post-processing block 28 is applied to a display unit 30.
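As a counterpart to the splitting sketch above, here is a minimal sketch of what the merging block 26 could compute per pixel, again with hypothetical names and under the assumption that the terminal knows the per-component bit allocation used at the encoder: the decoded basic bits are shifted back to their original weight and the decoded enhancement bits are appended as least significant bits.

```python
def merge_pixel(basic, enhancement, basic_bits=(4, 4, 4), total_bits=8):
    """Recombine basic and enhancement component values into full-depth YUV.

    basic, enhancement: per-component values recovered by the two decoders.
    basic_bits:         per-component number of most significant bits in the
                        basic layer (must match the split used at the encoder).
    """
    combined = []
    for b, e, nb_basic in zip(basic, enhancement, basic_bits):
        nb_enh = total_bits - nb_basic
        combined.append((b << nb_enh) | e)   # MSBs back in place, LSBs appended
    return tuple(combined)

# Round trip with the values produced by split_pixel above:
print(merge_pixel((11, 5, 3), (7, 10, 12)))   # (183, 90, 60) == (0xB7, 0x5A, 0x3C)
```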
It is also possible to use a single decoding block and to process the basic and the enhancement encoded streams successively.
Figure 4 is a diagram summarizing the steps of a video generation method to be implemented in an encoding device according to the invention, the steps of a transmission method to be implemented in a transmitter according to the invention, and the steps of a video recovery method to be implemented in a terminal according to the invention. According to Figure 4, a video generation method comprises:
- a step PE1 of splitting the original multi-component video signal into several partial multi-component video signals, comprising at least a basic multi-component video signal YUV_B and an enhancement multi-component video signal YUV_E1;
- a step PE2 of encoding each of said partial multi-component video signals, so as to generate at least a basic encoded stream 7_B and an enhancement encoded stream 7_E1;
a transmission method comprises:
- a step TR1 of getting said basic encoded stream 7_B alone or together with said enhancement encoded stream 7_E1 depending on a target display colour depth TCD set by the application 6b;
- a step TR2 of transmitting the basic encoded stream 7_B and, if applicable, the enhancement encoded stream 7_E1;
and said video recovery method comprises:
- a step PO1 of recovering a basic encoded stream and optionally an enhancement encoded stream;
- a step PO2 of decoding all recovered streams so as to generate a recovered basic multi-component video signal YUV_B' and, if applicable, a recovered enhancement multi-component video signal YUV_E1';
- and, when applicable, a step PO3 of combining said recovered basic multi-component video signal YUV_B' and said recovered enhancement multi-component video signal YUV_E1' so as to generate a combined recovered multi-component video signal YUV_C.
The transmission method and the video generation and recovery methods of the invention are implemented by way of specific hardware and/or software executed on processors located in a transmitter, in an encoding device, and in a terminal, respectively, according to the invention. With respect to the described methods, transmitter, encoding device, terminal and storage unit, modifications or improvements may be proposed without departing from the scope of the invention. The invention is thus not limited to the examples provided.
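To tie the steps PE1/PE2, TR1/TR2 and PO1 to PO3 summarised above together, the following schematic walk-through follows a single pixel through the whole chain. It is only an illustration: the MPEG encoding and decoding of steps PE2 and PO2 are replaced by pass-through placeholders, and missing enhancement bits are simply taken as zero when only the basic stream is transmitted (one possible policy, not mandated by the text).

```python
def pipeline(pixel, basic_bits=(4, 4, 4), target_colour_depth=24):
    """One-pixel walk through steps PE1/PE2, TR1/TR2 and PO1..PO3."""
    # PE1: split the original components into basic and enhancement parts.
    basic = tuple(c >> (8 - nb) for c, nb in zip(pixel, basic_bits))
    enh = tuple(c & ((1 << (8 - nb)) - 1) for c, nb in zip(pixel, basic_bits))

    # PE2: encode each partial signal (placeholder: no real MPEG coding here).
    stream_b, stream_e1 = basic, enh

    # TR1/TR2: transmit the basic stream, plus the enhancement stream only
    # if the target display colour depth requires it.
    send_enh = target_colour_depth > sum(basic_bits)
    received = (stream_b, stream_e1 if send_enh else None)

    # PO1/PO2: recover and decode the received streams (placeholder decoding).
    rec_b, rec_e1 = received

    # PO3: combine; absent enhancement bits default to zero.
    enh_values = rec_e1 if rec_e1 is not None else (0, 0, 0)
    return tuple((b << (8 - nb)) | e
                 for b, nb, e in zip(rec_b, basic_bits, enh_values))

print(pipeline((183, 90, 60), target_colour_depth=24))  # full depth: (183, 90, 60)
print(pipeline((183, 90, 60), target_colour_depth=12))  # basic only: (176, 80, 48)
```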
Use of the verb "comprise" and its conjugations does not exclude the presence of elements or steps other than those stated in the claims. Use of the article "a" or "an" preceding an element or step does not exclude the presence of a plurality of such elements or steps.

Claims

1. A transmission system comprising a transmitter hosting a video transmission application, a transmission channel, and a receiving terminal having a display unit, said video transmission application being designed so as to:
- get a basic encoded stream and optionally, depending on a target display colour depth, one or more enhancement encoded streams, in view of their transmission, said basic and enhancement encoded streams being derived from an original multi-component video signal in which each component corresponds to a sequence of pixel information quantified on a certain number of bits including a group of basic bits and one or more groups of enhancement bits, said basic encoded stream resulting from an encoding of a basic multi-component video signal comprising said basic bits, and said enhancement encoded streams resulting from the encoding of one or more enhancement multi-component video signals comprising said enhancement bits;
- transmit said encoded streams.
2. A transmission system as claimed in claim 1, wherein the target display colour depth is set by the receiving terminal.
3. A transmission method comprising the steps of:
- getting a basic encoded stream and optionally, depending on a target display colour depth, one or more enhancement encoded streams, said basic and enhancement encoded streams being derived from an original multi-component video signal in which each component corresponds to a sequence of pixel information quantified on a certain number of bits including a group of basic bits and one or more groups of enhancement bits, said basic encoded stream resulting from an encoding of a basic multi-component video signal comprising said basic bits, and said enhancement encoded streams resulting from the encoding of one or more enhancement multi-component video signals comprising said enhancement bits,
- transmitting said encoded streams.
4. A video encoding device for encoding an original multi-component video signal in which each component corresponds to a sequence of pixel information quantified on a certain number of bits including a group of basic bits and one or more groups of enhancement bits, said encoder comprising: - a splitter for splitting said original multi-component video signal into at least a basic multi-component video signal comprising said basic bits, and one or more enhancement multi-component video signals comprising said enhancement bits,
- at least one video encoder for encoding separately said basic and said enhancement multi-component video signals so as to generate a basic encoded stream and one or more enhancement encoded streams.
5. A video generation method comprising the steps of:
- processing an original multi-component video signal in which each component corresponds to a sequence of pixel information quantified on a certain number of bits including a group of basic bits and one or more groups of enhancement bits, so as to generate a basic multi-component video signal comprising said basic bits, and at least one enhancement multi-component video signal comprising said enhancement bits,
- encoding separately said basic and said enhancement multi-component video signals so as to generate a basic encoded stream and one or more enhancement encoded streams.
6. A program comprising instructions for implementing a video generation method as claimed in claim 5 when said program is executed by a processor.
7. A terminal comprising: - means for recovering a basic encoded stream and one or more enhancement encoded streams, said basic and enhancement encoded streams being derived from an original multi-component video signal in which each component corresponds to a sequence of pixel information quantified on a certain number of bits including a group of basic bits and one or more groups of enhancement bits, said basic encoded stream resulting from an encoding of a basic multi-component video signal comprising said basic bits, and said enhancement streams resulting from the encoding of one or more enhancement multi-component video signals comprising said enhancement bits,
- means for decoding said basic and said enhancement encoded streams so as to generate a basic and one or more enhancement recovered multi-component video signals,
- means for combining said basic and said enhancement recovered multi-component video signals so as to generate a combined recovered multi-component video signal,
- means for displaying said combined recovered multi-component video signal.
8. A video recovery method comprising the steps of:
- decoding a basic and one or more enhancement encoded streams so as to generate a basic and one or more enhancement recovered multi-component video signals, said basic and enhancement encoded streams being derived from an original multi-component video signal in which each component corresponds to a sequence of pixel information quantified on a certain number of bits including a group of basic bits and one or more groups of enhancement bits, said basic encoded stream resulting from an encoding of a basic multi-component video signal comprising said basic bits, and said enhancement streams resulting from the encoding of one or more enhancement multi-component video signals comprising said enhancement bits, - combining said basic and said enhancement recovered multi-component video signals so as to generate a combined recovered multi-component video signal.
9. A storage unit storing a content comprising at least a basic encoded stream and one or more enhancement encoded streams, said basic and enhancement encoded streams being derived from an original multi-component video signal in which each component corresponds to a sequence of pixel information quantified on a certain number of bits including a group of basic bits and one or more groups of enhancement bits, said basic encoded stream resulting from an encoding of a basic multi-component video signal comprising said basic bits, and said enhancement streams resulting from the encoding of one or more enhancement multi-component video signals comprising said enhancement bits, said basic encoded stream being intended to be used alone or together with one or more of said enhancement encoded streams depending on a target display colour depth.
PCT/IB2003/004995 2002-11-13 2003-11-04 Transmission system with colour depth scalability WO2004045217A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2003278445A AU2003278445A1 (en) 2002-11-13 2003-11-04 Transmission system with colour depth scalability

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP02292818.8 2002-11-13
EP02292818 2002-11-13

Publications (1)

Publication Number Publication Date
WO2004045217A1 (en) 2004-05-27

Family

ID=32309480

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2003/004995 WO2004045217A1 (en) 2002-11-13 2003-11-04 Transmission system with colour depth scalability

Country Status (2)

Country Link
AU (1) AU2003278445A1 (en)
WO (1) WO2004045217A1 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0594338A2 (en) * 1992-10-22 1994-04-27 International Business Machines Corporation Video decompression apparatus and method
US6263022B1 (en) * 1999-07-06 2001-07-17 Philips Electronics North America Corp. System and method for fine granular scalable video with selective quality enhancement
EP1130506A2 (en) * 2000-02-24 2001-09-05 Eastman Kodak Company Method and device for presenting digital images on a low-definition screen
WO2001091454A2 (en) * 2000-05-25 2001-11-29 Koninklijke Philips Electronics N.V. Bit-plane dependent signal compression
WO2002009429A2 (en) * 2000-07-26 2002-01-31 Eyeball Network Games Inc. System and method for adaptable, scalable multimedia broadcasting over a network
US20020136292A1 (en) * 2000-11-01 2002-09-26 Webcast Technologies, Inc. Encoding and decoding of video signals

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MALAK M ET AL: "An image database for low bandwidth communication links", DATA COMPRESSION CONFERENCE, 1991. DCC '91. SNOWBIRD, UT, USA 8-11 APRIL 1991, LOS ALAMITOS, CA, USA,IEEE COMPUT. SOC, US, 8 April 1991 (1991-04-08), pages 53 - 62, XP010034260, ISBN: 0-8186-9202-2 *

Cited By (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1827024A1 (en) * 2006-02-24 2007-08-29 Sharp Kabushiki Kaisha High dynamic range video coding
WO2007137063A2 (en) * 2006-05-22 2007-11-29 Hewlett-Packard Development Company, L.P. Compressed data
WO2007137063A3 (en) * 2006-05-22 2008-02-28 Hewlett Packard Development Co Compressed data
EP2070327A4 (en) * 2006-09-30 2012-08-22 Thomson Licensing Method and device for encoding and decoding color enhancement layer for video
EP2070327A1 (en) * 2006-09-30 2009-06-17 THOMSON Licensing Method and device for encoding and decoding color enhancement layer for video
WO2008049446A1 (en) * 2006-10-25 2008-05-02 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Quality scalable coding
US8774269B2 (en) 2006-10-25 2014-07-08 Franuhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Quality scalable coding with mapping different ranges of bit depths
US9843800B2 (en) 2006-10-25 2017-12-12 Ge Video Compression, Llc Quality scalable coding with mapping different ranges of bit depths
US11115651B2 (en) 2006-10-25 2021-09-07 Ge Video Compression, Llc Quality scalable coding with mapping different ranges of bit depths
US10659776B2 (en) 2006-10-25 2020-05-19 Ge Video Compression, Llc Quality scalable coding with mapping different ranges of bit depths
EP3484154A1 (en) * 2006-10-25 2019-05-15 GE Video Compression, LLC Quality scalable coding
US10165269B2 (en) 2006-10-25 2018-12-25 Ge Video Compression, Llc Quality scalable coding with mapping different ranges of bit depths
TWI382747B (en) * 2007-02-09 2013-01-11 Gentex Corp Improved imagine device
US8116579B2 (en) 2007-08-17 2012-02-14 Imagination Technologies Limited Method and system for data compression
WO2009024744A3 (en) * 2007-08-17 2009-06-11 Imagination Tech Ltd A method and system for data compression
EP2512137A3 (en) * 2007-08-17 2013-02-13 Imagination Technologies Limited A method and system for data compression
WO2009034424A3 (en) * 2007-09-14 2009-05-07 Dooworks Fz Co Method and system for processing of images
GB2482264B (en) * 2007-09-14 2012-04-18 Doo Technologies Fzco Method and system for processing of images
GB2482264A (en) * 2007-09-14 2012-01-25 Doo Technologies Fzco Combining reduced bit depth image data streams into a single, merged stream
WO2009034424A2 (en) * 2007-09-14 2009-03-19 Dooworks Fz Co Method and system for processing of images
US8385412B2 (en) * 2007-10-15 2013-02-26 Thomson Licensing Method and apparatus for inter-layer residue prediction for scalable video
US20100208810A1 (en) * 2007-10-15 2010-08-19 Thomson Licensing Method and apparatus for inter-layer residue prediction for scalable video
US8537894B2 (en) * 2007-10-15 2013-09-17 Thomson Licensing Methods and apparatus for inter-layer residue prediction for scalable video
US20100208809A1 (en) * 2007-10-15 2010-08-19 Thomson Licensing Methods and apparatus for inter-layer residue prediction for scalable video
US9078024B2 (en) 2007-12-18 2015-07-07 Broadcom Corporation Video processing system with user customized graphics for use with layered video coding and methods for use therewith
US8311098B2 (en) 2007-12-19 2012-11-13 Broadcom Corporation Channel adaptive video transmission system for use with layered video coding and methods for use therewith
EP2076038A2 (en) * 2007-12-20 2009-07-01 Broadcom Corporation Video processing system with layered video coding and methods for use therewith
EP2076038A3 (en) * 2007-12-20 2012-03-14 Broadcom Corporation Video processing system with layered video coding and methods for use therewith
US9210480B2 (en) 2007-12-20 2015-12-08 Broadcom Corporation Video processing system with layered video coding and methods for use therewith
US8416848B2 (en) 2007-12-21 2013-04-09 Broadcom Corporation Device adaptive video transmission system for use with layered video coding and methods for use therewith
US9143731B2 (en) 2008-01-02 2015-09-22 Broadcom Corporation Mobile video device for use with layered video coding and methods for use therewith
US8594191B2 (en) 2008-01-03 2013-11-26 Broadcom Corporation Video processing system and transcoder for use with layered video coding and methods for use therewith
US8520737B2 (en) 2008-01-04 2013-08-27 Broadcom Corporation Video processing system for scrambling layered video streams and methods for use therewith
EP2109317A1 (en) * 2008-04-10 2009-10-14 Sony Corporation Improving video robustness using spatial and temporal diversity
US8995525B2 (en) 2008-04-16 2015-03-31 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Bit-depth scalability
US10958936B2 (en) 2008-04-16 2021-03-23 Ge Video Compression, Llc Bit-depth scalability
US11711542B2 (en) 2008-04-16 2023-07-25 Ge Video Compression, Llc Bit-depth scalability
US9342654B2 (en) 2011-11-18 2016-05-17 Koninklijke Philips N.V. Encoding high quality (medical) images using standard lower quality (web) image formats
WO2013072889A1 (en) * 2011-11-18 2013-05-23 Koninklijke Philips Electronics N.V. Encoding high quality (medical) images using standard lower quality (web) image formats
CN104365094A (en) * 2012-06-14 2015-02-18 Kddi株式会社 Video encoding device, video decoding device, video encoding method, video decoding method, and program
CN104365094B (en) * 2012-06-14 2018-05-18 Kddi株式会社 Moving picture encoding device, animated image decoding apparatus, moving picture encoding method, animated image coding/decoding method and program
WO2014002422A1 (en) * 2012-06-29 2014-01-03 Canon Kabushiki Kaisha Image encoding apparatus, image encoding method and program, image decoding apparatus, and image decoding method and program

Also Published As

Publication number Publication date
AU2003278445A1 (en) 2004-06-03

Similar Documents

Publication Publication Date Title
WO2004045217A1 (en) Transmission system with colour depth scalability
US8646014B2 (en) Multistream video communication with staggered access points
US7836193B2 (en) Method and apparatus for providing graphical overlays in a multimedia system
US7634147B2 (en) Fingerprinting digital video for rights management in networks
KR101224097B1 (en) Controlling method and device of multi-point meeting
US7136066B2 (en) System and method for scalable portrait video
US7016412B1 (en) System and method for dynamic adaptive decoding of scalable video to balance CPU load
US20060192848A1 (en) Video conferencing system
US8571027B2 (en) System and method for multi-rate video delivery using multicast stream
EP1145561A1 (en) System and method for encoding and decoding the residual signal for fine granular scalable video
CN1801885A (en) Multimedia signal matching system and method for performing picture-in-picture function
US20040001091A1 (en) Method and apparatus for video conferencing system with 360 degree view
US7454692B2 (en) Encoder based error resilience method in a video codec
CN101237583A (en) A decoding and coding method and device for multiple screen
US20130291011A1 (en) Transcoding server and method for overlaying image with additional information therein
US11375171B2 (en) System and method for preloading multi-view video
US6337882B1 (en) Method and apparatus for generating unlimited selected image views from a larger image
CN209949313U (en) Signal transmission system, signal encoding device, and signal decoding device
CN114640849B (en) Live video encoding method, device, computer equipment and readable storage medium
CN117354524B (en) Method, device, equipment and computer medium for testing coding performance of encoder
Duong et al. HEVC based distributed scalable video coding for surveillance visual system
US20040179136A1 (en) Image transmission system and method thereof
KR20020032862A (en) An object-based multimedia service system and a service method using a moving picture encoding
Chiang et al. Compatible coding of digital interlaced HDTV using prediction of the even fields from the odd fields
CN112422994A (en) Network television live broadcasting system and method

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP