TECHNICAL GLOSSARY

From Gary P Hayes

Bluetooth

A wireless standard for connecting cameras, PDAs, laptops, computers and cell phones. It uses very high frequency radio waves and provides up to 720 Kbps of data transfer within a range of 10 meters, and up to 100 meters with a power boost. Unlike IrDA, which requires that devices be aimed at each other (line of sight), Bluetooth uses omnidirectional radio waves that can transmit through walls and other non-metal barriers. If there is interference from other devices, the transmission does not stop, but its speed is downgraded. The name Bluetooth comes from King Harald Blatan (Bluetooth) of Denmark, who began to Christianise the country in the 10th century. Ericsson was the first to develop the specification.

CG

a) Short for "Computer Generated" and usually refers to special effects shots. If someone says, "The background was all CG," it means that the background image was created in a computer.

b) Acronym for Computer Graphics

c) Acronym for Character Generator

CGI

a) Common Gateway Interface - A standard that lets a Web server pass data from fill-in forms, checkboxes, text inputs and the like to an external program, allowing WWW pages to be generated on the fly.
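As a sketch of the idea, the function below parses a raw QUERY_STRING and builds the header-plus-body response a CGI program would write to its standard output (the function name and the example query string are illustrative, not part of any CGI library):

```python
# Minimal CGI-style response builder (a sketch of the mechanism only).
from urllib.parse import parse_qs

def build_response(query_string):
    """Turn a raw QUERY_STRING into an HTTP response with an HTML body."""
    fields = parse_qs(query_string)            # e.g. "name=Ada&lang=en"
    name = fields.get("name", ["stranger"])[0]
    body = f"<html><body><p>Hello, {name}!</p></body></html>"
    # A CGI program writes the header block, a blank line, then the body.
    return "Content-Type: text/html\r\n\r\n" + body

print(build_response("name=Ada"))
```

In a real deployment the server sets QUERY_STRING in the environment and the script prints the response; this sketch isolates just that transformation.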

b) Abbreviation for Computer Graphic Imagery or Computer Generated Imagery and simply means that an image was produced using a computer.

Codec

a) Coder-Decoder - Hardware or software that converts analog sound, speech or video to digital code and vice versa (analog to digital, digital to analog). Codecs must faithfully reproduce the original signal, but they must also compress the binary code to the smallest number of bits possible in order to transmit faster. As network bandwidth increases, so does the demand for more audio and video, so compression is always an issue. Software codecs are installed into audio and video editing programs as well as media players that download audio and video over the Web, and rely entirely on the PC for processing. Hardware codecs are specialized chips built into digital telephones and videoconferencing stations to maximize performance. Although hardware codecs are faster than software routines, faster desktop machines are increasingly enabling software codecs to perform quite adequately.

b) Compressor-Decompressor - Hardware or software that compresses digital data into a smaller binary format than the original. This function is built into the audio and video codec in the definition above; strictly, a true compressor/decompressor refers to a codec that performs only compression and possibly encryption, but not analog-to-digital and digital-to-analog conversion. Some popular codecs for computer video include MPEG, Indeo, Cinepak, QuickTime and Video for Windows.
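A compressor/decompressor in this narrow sense can be illustrated with a toy run-length encoder (a minimal sketch; the real codecs named above are far more sophisticated):

```python
def compress(data: bytes) -> list:
    """Run-length encode: collapse runs of repeated bytes into (count, byte) pairs."""
    out = []
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i]:
            run += 1
        out.append((run, data[i]))
        i += run
    return out

def decompress(pairs: list) -> bytes:
    """Expand (count, byte) pairs back to the original byte string."""
    return b"".join(bytes([b]) * n for n, b in pairs)

sample = b"aaaabbbcc"
packed = compress(sample)          # [(4, 97), (3, 98), (2, 99)]
assert decompress(packed) == sample
```

The essential property is the round trip: decompress(compress(x)) must return x exactly for a lossless scheme.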

Compositing

Simultaneous multi-layering and design for moving pictures. Modern video designs often use many techniques together, such as painting, retouching, rotoscoping, keying/matting, digital effects and color correction as well as multi-layering to create complex animations and opticals for promotions, title sequences and commercials as well as in program content. Besides the creative element there are other important applications for compositing equipment such as image repair, glass painting and wire removal, especially in motion pictures. The quality of the finished work, and therefore of the equipment, can be crucial, especially where seamless results are demanded: for example, adding a foreground convincingly over a background - placing an actor into a scene - without any tell-tale blue edges or other signs that the scene is composed. The better the compositing results are supposed to be, the higher the system requirements. High-end applications require 8 or 10 bit signal processing to maintain quality throughout various generations and to ensure high-quality chroma keys. The quality of a compositing system also depends on the number of layers, i.e. focal planes, that can be simultaneously processed and on how fast the composition can be rendered. Sometimes also called "vertical editing".

Compression (video)

The process of reducing the bandwidth or data rate of a video stream. The analogue broadcast standards used today, PAL, NTSC and SECAM, are, in fact, compression systems which reduce the data content of their original RGB sources. Digital compression systems analyze their picture sources to find and remove redundancy both within and between picture frames. The techniques were primarily developed for digital data transmission but have been adopted as a means of reducing transmission bandwidths and storage requirements on disks and VTRs. A number of compression techniques are in regular use; these include ETSI, JPEG, Motion JPEG, DV, MPEG-1, MPEG-2 and MPEG-4. Where different techniques are used in the same stream, problems can occur and picture quality can suffer more than if the same method is used throughout. The MPEG-2 family of compression schemes, which was designed for program transmission, is also being targeted at studio applications via "StudioMPEG". While there is much debate, and new technologies continue to be developed, it remains true that the best compressed results are produced from the highest quality source pictures. Poor inputs do not compress well. Noise, which may be interpreted as important picture detail, is the enemy of compression.
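The idea of removing redundancy between frames can be sketched with a toy delta encoder (illustrative only; real inter-frame coders use motion compensation and entropy coding rather than plain subtraction):

```python
# Temporal redundancy sketch: store only the difference between successive
# frames. Here a "frame" is simply a flat list of pixel values.
def delta_encode(prev, cur):
    """Difference of the current frame against the previous one."""
    return [c - p for p, c in zip(prev, cur)]

def delta_decode(prev, diff):
    """Rebuild the current frame from the previous frame plus the difference."""
    return [p + d for p, d in zip(prev, diff)]

frame1 = [10, 10, 10, 200]
frame2 = [10, 10, 12, 200]         # nearly identical to frame1
diff = delta_encode(frame1, frame2)
print(diff)                        # [0, 0, 2, 0] -- mostly zeros, compresses well
assert delta_decode(frame1, diff) == frame2
```

The run of zeros is exactly the inter-frame redundancy a later entropy-coding stage would squeeze out; noisy input destroys those zeros, which is why, as the entry says, noise is the enemy of compression.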

Computer Animation

Use of computers to create animations. There are a few different ways to do this. One is 3D animation: objects are modelled and then rendered, which produces convincing three-dimensional-looking results. Another is to use standard computer painting tools to paint single frames and composite them; these can later be saved as a movie file or output to video. A third is to use transitions and other special effects, such as morphing, to modify existing images and video.

Computer Graphics

Any type of image created using any kind of computer. There is a vast range of images a computer can create, and just as many ways of creating them. Images created by computers can be very simple, such as lines and circles, or extremely complex, such as fractals and elaborate rendered animations.

DivX

a) DivX is an MPEG-4-based video compression technology that can shrink digital video to sizes small enough to be transported over the Internet while maintaining high visual quality. MPEG-4 is a new standard of video compression that is both high quality and low bit rate. DivX files are usually only a fraction (around 15%) of the size of a standard DVD, even at 640 x 480 resolution, making them the best home video format thus far. They take only half the time to encode, yet are smaller than MPEG-1 files thanks to their compression technology; some have even called MPEG-4 the "MP3 of the video world". Quality ranges from net-streaming quality to DVD and better.

b) Digital Video EXpress - Originally, a DVD rental system that was rolled out in June of 1998 and taken off the market one year later due to lack of sales. A joint venture of Circuit City and a Los Angeles entertainment law firm, Divx used special discs and a special DVD machine that played Divx movies and regular DVD movies. Consumers paid a rental charge for the Divx disc, which lasted for two days after the first viewing. Additional playing time or unlimited use could be purchased by credit card via the modem in the Divx player, or the disc was thrown away. One of the problems with the system was that the Divx disc was registered to the specific machine that sent in the registration and could not be played on other Divx players. All remaining Divx discs could not be upgraded to unlimited use, but could be viewed until June 30, 2001.

c) A copy of the DVD copy protection decryption algorithm that was lifted from a media player and passed around hacker sites. The Divx name was used in honor of a "defunct" system.

GB

a) Gigabyte (GB) - One billion bytes (technically 1,073,741,824 bytes).

b) Gigabit (Gb) - One billion bits (technically 1,073,741,824 bits). The convention of lower case "b" for bit and upper case "B" for byte is not always followed and is often misprinted; thus, Gb may sometimes refer to gigabyte.

Gbps - Gigabits per second; a measurement of data transfer capacity on a computer.
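Because link rates are quoted in bits per second while file sizes are quoted in bytes, a quick conversion is useful (a sketch; the function name is illustrative, and decimal units are assumed throughout):

```python
def transfer_seconds(size_bytes, rate_bps):
    """Time to move size_bytes over a link of rate_bps (bits per second)."""
    return size_bytes * 8 / rate_bps

one_gb = 10**9                           # decimal gigabyte
print(transfer_seconds(one_gb, 10**9))   # 1 GB over 1 Gbps -> 8.0 seconds
```

The factor of 8 is exactly the bit/byte confusion the entry above warns about.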

Megapixel

CCD resolution of one million pixels. Digicams are commonly rated by Megapixels. The higher geometric pixel resolution of these sensors produces higher quality digital photographic images. You multiply the horizontal resolution by the vertical resolution to get the total pixel count: 1280 x 960 pixels = 1 Megapixel; 1600 x 1200 pixels = 2 Megapixels; 2048 x 1536 pixels = 3 Megapixels.
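The multiplication above can be expressed directly (function names are illustrative; note that camera marketing rounds the figures freely):

```python
def pixel_count(width, height):
    """Total number of pixels on the sensor."""
    return width * height

def megapixels(width, height):
    """Pixel count expressed in millions (the 'Megapixel' rating)."""
    return pixel_count(width, height) / 1_000_000

# Figures from the entry above:
print(megapixels(1280, 960))    # ~1.23 million pixels, sold as "1 Megapixel"
print(megapixels(2048, 1536))   # ~3.15 million pixels, sold as "3 Megapixels"
```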

Metadata

Data about data. Data about the video and audio but not the video or audio themselves. This is important for labeling and finding data, either in a "live" data stream or an archive. Within the studio and in transmission, digital technology allows far more information to be added. Some believe metadata will revolutionize every aspect of production and distribution. Metadata existed long before digital networks; video time code and film frame numbers are but two examples.

Metafile

A file containing information that describes or specifies another file. It generally refers to graphics files that can hold vector drawings and bitmaps. For example, Windows Metafiles (WMFs) and Enhanced Metafiles (EMFs) can store pictures in vector graphics and bitmap formats as well as text. A Computer Graphics Metafile (CGM) also stores both types of graphics.

Resolution

a) Detail. In digital video and audio, the number of bits (four, eight, 10, 12, etc.) determines the resolution of the digital signal. Four bits yields a resolution of one in 16. Eight bits yields a resolution of one in 256. Ten bits yields a resolution of one in 1,024. Eight bits is the minimum acceptable for broadcast television.
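The bit-to-levels relationship quoted above is just a power of two:

```python
def levels(bits):
    """Number of distinct values an n-bit digital sample can represent."""
    return 2 ** bits

# The figures quoted in the entry:
print(levels(4))    # 16
print(levels(8))    # 256
print(levels(10))   # 1024
```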

b) A measure of the finest detail that can be seen, or resolved, in a reproduced image. While influenced by the number of pixels in an image (for high definition approximately 2,000 x 1,000; broadcast NTSC TV 720 x 487; broadcast PAL TV 720 x 576), note that the pixel numbers do not define ultimate resolution but merely the resolution of that part of the equipment. The quality of lenses, display tubes, film process and film scanners, etc., used to produce the image on the screen must all be taken into account. This is why a live broadcast of the Olympic Games looks better than a broadcast recorded and played off of VHS, even though all are NTSC or PAL.

Sampling

Process by which an analog signal is measured, often millions of times per second for video, in order to convert the analog signal to digital. The official sampling standard for standard definition television is ITU-R 601. For TV pictures, eight or 10 bits are normally used; for sound, 16 or 20 bits are common, and 24 bits are being introduced. The ITU-R 601 standard defines the sampling of video components based on 13.5 MHz, and AES/EBU defines sampling of 44.1 and 48 kHz for audio.
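From the 13.5 MHz figure one can derive the raw data rate of component video (a sketch, assuming the standard 4:2:2 structure in which each colour-difference channel is sampled at half the luminance rate):

```python
# ITU-R 601 4:2:2 sampling: luminance (Y) at 13.5 MHz, each colour-
# difference channel (Cb, Cr) at half that rate.
Y_RATE = 13_500_000
CB_RATE = CR_RATE = Y_RATE // 2

def raw_bitrate(bits_per_sample):
    """Uncompressed active-sample bit rate for 4:2:2 component video."""
    return (Y_RATE + CB_RATE + CR_RATE) * bits_per_sample

print(raw_bitrate(8) / 1e6)    # 216.0 Mbit/s for 8-bit samples
print(raw_bitrate(10) / 1e6)   # 270.0 Mbit/s for 10-bit samples
```

These large raw rates are precisely why the compression schemes listed elsewhere in this glossary matter.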

Scaling

a) Scaling is the act of changing the resolution of an image. For example, scaling a 640 x 480 image by one-half results in a 320 x 240 image. Scaling by 2x results in an image that is 1280 x 960. There are many different methods for image scaling, and some "look" better than others. In general, though, the better the algorithm "looks", the more expensive it is to implement.
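Nearest-neighbour scaling, the cheapest (and worst-looking) of those methods, can be sketched in a few lines (illustrative code; better-looking scalers use filtered interpolation such as bilinear or bicubic):

```python
def scale_nearest(img, factor):
    """Scale an image (a list of rows of pixel values) by a factor.

    Each output pixel simply copies the nearest input pixel; factor may
    shrink (< 1) or enlarge (> 1) the image.
    """
    h, w = len(img), len(img[0])
    new_h, new_w = int(h * factor), int(w * factor)
    return [[img[int(y / factor)][int(x / factor)] for x in range(new_w)]
            for y in range(new_h)]

tiny = [[1, 2],
        [3, 4]]
print(scale_nearest(tiny, 2))   # each pixel becomes a 2 x 2 block
```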

b) Analog video signals have to be scaled prior to digitizing in an ADC so that the full amplitude of the signal makes best use of the available levels in the digital system. The ITU-R BT.601 digital coding standard specifies, when using 8 bits, black to be set at level 16 and white at 235. The same range of values applies should RGB be used. Computer systems tend to operate with a different scaling, with black set at level 0 and white at 255; for color they usually use RGB from 0-255. Clearly, going between computers and TV requires processing to change color space and scaling.
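The mapping between the studio range (16-235) and the computer full range (0-255) is a simple linear rescale (a sketch for 8-bit values; the function name is illustrative, and out-of-range inputs are clamped):

```python
def studio_to_full(v):
    """Map an 8-bit studio-range sample (black 16, white 235) to computer
    full range (black 0, white 255), clamping values outside the range."""
    full = round((v - 16) * 255 / 219)   # 219 = 235 - 16 usable studio steps
    return max(0, min(255, full))

print(studio_to_full(16))    # 0   (black)
print(studio_to_full(235))   # 255 (white)
```

Skipping this conversion is what makes TV material look washed out on a computer display, or computer graphics look crushed on a TV.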

Streaming

Sending live or on-demand video or audio broadcasts over the Internet. Popular streaming video formats include RealVideo, QuickTime and WMV.

Streaming Video

A one-way video transmission over a data network. It is widely used on the Web as well as private intranets to deliver video on demand or a video broadcast. Unlike movie files (MPG, AVI, etc.) that are played after they are downloaded, streaming video is played within a few seconds of requesting it, and the data is not stored permanently in the computer. Momentary blips in a video are annoying to watch, and the only way to compensate for them over an erratic network such as the Internet is to get extra data in before you start. In streaming video, both the client and server cooperate for uninterrupted motion. The client side buffers a few seconds of video data before it starts sending it to the screen and speakers, and tries to keep ahead of itself throughout the streaming session. If the streaming video is broadcast live, some might consider it "real-time video." However, real-time means no delays, and there is a built-in delay in streaming video.
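The pre-buffering behaviour described above can be sketched as a toy playout buffer (BUFFER_SECONDS and the one-chunk-per-second model are illustrative, not any real streaming protocol):

```python
from collections import deque

# Toy playout buffer: the client pre-buffers a few seconds of data before
# starting playback, so short network stalls do not interrupt the picture.
BUFFER_SECONDS = 3            # illustrative buffering target

buffer = deque()

def on_chunk_received(chunk):
    """Network side: append an arriving chunk to the buffer."""
    buffer.append(chunk)

def ready_to_play():
    """Playback side: start only once enough data is queued up.
    In this toy model, one chunk represents one second of video."""
    return len(buffer) >= BUFFER_SECONDS

for t in range(3):
    on_chunk_received(f"frame-data-{t}")
print(ready_to_play())   # True once 3 seconds are buffered
```

The gap between data arriving and playback starting is exactly the built-in delay that keeps streaming video from being truly "real-time".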