Thursday, February 19, 2015

DVI-D to VGA Converters

Even though digital monitors and computer systems have become affordable, some legacy software or hardware systems only work with VGA monitors. Sometimes the software must be upgraded or replaced outright, and the cost of doing so is usually high.

An affordable workaround therefore has real value, and the DVI-D to VGA converter provides one. DVI, short for Digital Visual Interface, is the standard interface for high-performance connections between PCs and flat-panel displays, CRT displays, projectors, and HDTVs.

DVI converters translate video signals from one device to another device with an incompatible signal format. The connector on the source computer and the connector on the target monitor determine which converter you need; a cheap passive DVI-to-VGA adapter plug will not work with a DVI-D source, because DVI-D carries no analog signal. To resolve the compatibility issue, first identify what your computer uses: DVI-D, DVI-A, DVI-I, VGA, or HDMI. Then identify the video connection type on the monitor or projector.

A DVI-D to VGA converter is generally preferred because it is far more affordable than costly software or hardware upgrades. If you do not want to replace or upgrade working VGA monitors and legacy systems, a DVI-D to VGA converter is an ideal solution, and it postpones other expensive purchases as well. In short, such converters extend the life of your equipment and save a considerable amount of money.

Wednesday, February 18, 2015

VGA to DVI Converter Box

Analog and digital signals are transmitted in fundamentally different ways, so compatibility issues arise between electronic products. A VGA to DVI conversion is necessary whenever an analog signal has to be turned into a digital one.

Mixing analog and digital signals requires a supporting device; consider a computer with a VGA graphics card driving a digital flat-screen sign. The VGA to DVI-D converter is such a device, and it becomes necessary equipment when you cannot change the graphics card or the computer. Digital message centers display maps, directions, and notifications, and are common in hospitals and airports. In essence, such a sign is a bulletin board with a screen: it presents text and images much like a television display, and its content is changed by software running on a control station, typically a computer.

When the signage is upgraded, the control station usually changes as well. If the new sign is digital but the controlling computer has only a VGA output, a compatibility problem emerges. A VGA to DVI conversion is then required so that the signal sent from the control station is properly received and displayed by the sign.

Here is the good news: a VGA to DVI conversion is far more readily available and affordable than upgrading the hardware on the control station. In some cases the operating system and software would also have to be upgraded to accommodate new hardware. A VGA to DVI-D converter is both cheaper and faster than upgrading the control station's hardware and software.

Tuesday, February 17, 2015

Decide Between VGA, DVI, and HDMI for Your Monitor Connection

When buying a computer, it is important to know exactly which video output ports it supports. After all, what you want is to be able to hook up to a monitor or projector easily, wherever you are. It pays to weigh this decision carefully.

There is no single clear-cut standard for video connectors, which is why most projectors, and many displays, have multiple input ports. The main computer video cable standards are VGA, DVI, and HDMI. Each has its own advantages, and they still compete for people's favor.

VGA cables carry only analog signals. As a result, the analog signal often has to be converted back to digital at the display, which can cost some quality from the video source. On the other hand, VGA can reach a relatively wide range of video resolutions because it can use higher frequencies.

DVI, one of VGA's successors, is appearing on more and more computers and displays, especially higher-end graphics cards and high-resolution monitors. Even so, it is not yet as ubiquitous a connector as VGA.

In addition, several types of DVI connector carry uncompressed digital video, so the video quality depends far less on the cable itself, and the resulting picture can be noticeably better.

If DVI is the successor to VGA, then HDMI is arguably the successor to DVI. HDMI enjoys a better reputation than DVI, partly because it appears on high-definition televisions. Its compatibility with newer televisions has made it increasingly popular in the electronics market, and its compact connector is showing up on more and more computers and computer displays as well.

Monday, February 16, 2015

VGA vs. HDMI

A variety of cables are used in daily life and work to carry video and audio between computers and televisions, and VGA and HDMI are two of the most common cable and video standards. Both can hook a computer up to a television so that you can watch the computer's output on the larger screen, which makes them genuinely useful. Whether you want to put a movie onto the TV or show a slide deck to friends, family, or a business meeting, they serve you well.

VGA and HDMI represent two different types of video standard. VGA is an analog standard, in the same family as S-Video, radio frequency, D-Terminal, and component video. HDMI, on the other hand, connects digitally to audio and video sources such as Blu-ray disc players, DVD players, HD DVD players, computers, video game consoles, and camcorders.

A VGA cable carries only the video signal; audio and any other signals cannot travel through it. HDMI carries both video and audio. This difference matters because it determines whether you need additional cables, connectors, or adapters. If you want to watch a movie on your television from a laptop that has only a VGA output, you must run a separate audio cable; with a single HDMI cable, there is nothing more to worry about.

Sunday, February 15, 2015

Difference between DVI and VGA

VGA and DVI connectors both carry video from a source, for example a computer or tablet, to a display device such as a monitor, TV, or projector. The main difference between them lies in how the video signal travels: VGA connectors and cables carry analog signals, while DVI can carry both analog and digital signals. DVI is the newer technology and can deliver a sharper picture. Fortunately, the two are easy to tell apart, since they differ physically: VGA connectors and ports are blue, while DVI connectors are white.

DVI can also deliver a higher-quality signal than VGA, and the higher the resolution, the more noticeable the difference becomes. The video quality ultimately depends on how the signal is carried and on the length and quality of the cable.

From the user's point of view, however, VGA and DVI operate identically, and most users never need to think about the difference. For them, both connectors work the same way: devices have female ports, and the connectors have male endpoints.


In operation, the real difference between VGA and DVI lies in the data transmission process. VGA connectors carry analog signals, so the digital video signal from the source must be converted to analog before it travels down the cable. An old CRT monitor can accept that analog signal directly, but the majority of display devices used today are digital, so the VGA signal is converted back to digital at the other end. Each conversion step degrades the video quality somewhat.
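To make the degradation concrete, here is a rough numeric sketch of a digital-to-analog-to-digital round trip. It is an illustrative model, not a simulation of real hardware, and the noise figure is an arbitrary assumption:

```python
import numpy as np

# Illustrative model of the VGA chain: DAC at the source, noisy analog
# cable, ADC inside a digital monitor. The all-digital DVI path skips this.
rng = np.random.default_rng(0)
digital_out = rng.integers(0, 256, size=100_000)        # 8-bit source pixel values

analog = digital_out / 255.0                            # DAC: values become a voltage
analog += rng.normal(0, 0.004, analog.shape)            # cable noise (made-up figure)
digital_in = np.clip(np.round(analog * 255), 0, 255)    # ADC at the digital display

errors = np.abs(digital_in - digital_out)
print((errors > 0).mean())   # fraction of pixels that no longer match exactly
```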

Saturday, February 14, 2015

Image Cropping

If you are starting out with digital images, or having trouble getting started, there are some basics you need to master. To view digital images on a screen, you should understand how to work with them, and learning how to resize a digital image is especially useful. It is like learning to drive: once you have mastered the necessary knowledge, which is not difficult at all, you will benefit from it for the rest of your life. The basics you need are laid out below.

Digital image size is measured in pixels. Pixels are what it is all about, and digital is quite different from film. "Resize" by itself is a vague, ambiguous term; it has no specific meaning until we say exactly what we intend. There are several ways to resize an image, and each leads to a different result.

First, consider cropping. Cropping simply cuts away some of the edges and includes less area in the final image, like taking scissors to a paper print, except that we may enlarge the image afterwards. The trimmed pixels are discarded once the image is cropped, so the image dimensions become smaller, and the crop can change both the scene included and the shape of the frame. A little cropping often improves the composition of an image: it removes the uninteresting blank nothingness around the edges and makes the actual subject larger in the frame.
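As a concrete illustration, here is a minimal cropping sketch using the Pillow library; the file name and crop box are placeholders:

```python
from PIL import Image

img = Image.open("photo.jpg")           # e.g. a 4000 x 3000 pixel photo
print(img.size)

# (left, upper, right, lower) in pixel coordinates
cropped = img.crop((500, 400, 3500, 2600))
print(cropped.size)                     # (3000, 2200): the trimmed edges are gone for good
cropped.save("photo_cropped.jpg")
```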

Friday, February 13, 2015

How to Scale an Image

There are three main types of algorithm used to increase the size of an image during scaling. The simplest takes each pixel in the source image and copies it to the corresponding position in the larger image. Gaps appear between those pixels in the larger image, and they are filled by assigning each empty pixel the color of the nearest source pixel, such as the one to its left. In effect, this multiplies the image and its data into a larger area. The method, called nearest-neighbor, preserves the data exactly, but image quality usually suffers because the enlarged blocks of individual pixels become clearly visible.
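A minimal sketch of the idea, assuming the image is held as a NumPy array; repeating each pixel along both axes is equivalent to the copy-and-fill just described:

```python
import numpy as np

def nearest_neighbor_upscale(pixels: np.ndarray, factor: int) -> np.ndarray:
    # Each source pixel becomes a factor x factor block of copies,
    # so no data is lost, but the blocks become visible.
    return pixels.repeat(factor, axis=0).repeat(factor, axis=1)

small = np.array([[0, 255],
                  [255, 0]], dtype=np.uint8)      # a tiny 2x2 checkerboard
big = nearest_neighbor_upscale(small, 3)
print(big.shape)                                  # (6, 6): each pixel is now a 3x3 block
```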

Other scaling algorithms, bilinear interpolation and bicubic interpolation, fill the empty spaces in an enlarged image with pixels whose color is computed from the known pixels surrounding them. The scaled image is smoother than one produced by the nearest-neighbor method, but it may suffer other problems, becoming blurry or dissolving into indistinct blocks of color.
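With Pillow, these approaches correspond to different resampling filters. A minimal comparison sketch, with "photo.jpg" as a placeholder:

```python
from PIL import Image

img = Image.open("photo.jpg")
w, h = img.size

blocky = img.resize((w * 2, h * 2), Image.NEAREST)    # visible pixel blocks
smooth = img.resize((w * 2, h * 2), Image.BILINEAR)   # smoother, can look blurry
smoother = img.resize((w * 2, h * 2), Image.BICUBIC)  # usually the best of the three
```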

The third type of scaling algorithm uses pattern recognition to identify the distinct areas of the image being enlarged, then tries to reconstruct the missing pixels. It can produce excellent results, but the more times an image is scaled this way, the more visual artifacts appear. It is also far more computationally expensive than the other methods, particularly on full-color photographic images.

Thursday, February 12, 2015

Image scaling

Scaling is also known as resizing, and resampling is sometimes called scaling as well, which is not entirely unreasonable. Image scaling is the computer graphics process that increases or decreases the size of a digital image. An image can be scaled interactively in an image viewer or editor, or automatically by a program so that it fits an area of a different size without extra effort. There are many ways to reduce an image; the most common is a type of sampling called subsampling (undersampling), which preserves the original appearance well. Enlarging an image is more complicated, because a larger area must be filled with pixels that do not exist in the source.
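A minimal sketch of reduction by subsampling, assuming the image is a NumPy array; real software usually filters (averages) first to avoid aliasing:

```python
import numpy as np

pixels = np.arange(64, dtype=np.uint8).reshape(8, 8)  # stand-in for an 8x8 image
half = pixels[::2, ::2]                               # keep every 2nd row and column
print(half.shape)                                     # (4, 4): half size in each dimension
```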


Scaling is a non-trivial process that trades off efficiency, smoothness, and sharpness. With bitmap graphics, the pixels forming the image become more and more visible as the image is enlarged or reduced, making the result look "soft" if the pixels are averaged, or jagged if they are not.

In one common sense of the word, however, scaling does not change the image pixels at all. It changes only the single dpi (ppi) number, which is simply stored in the image file. The only equipment that uses that number is the printer, and it controls nothing but the size at which the image prints on paper; images on a computer screen are not affected by it at all. The camera cannot know how you intend to print the image, so it just writes an arbitrary default, and you should set the correct number yourself before printing anything.
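A minimal Pillow sketch of the point: rewriting the dpi number changes only the print size, not a single pixel ("photo.jpg" is a placeholder):

```python
from PIL import Image

img = Image.open("photo.jpg")
print(img.size, img.info.get("dpi"))        # pixel dimensions and stored dpi, if any

img.save("photo_300dpi.jpg", dpi=(300, 300))
print(Image.open("photo_300dpi.jpg").size)  # same pixel dimensions; only print size changed
```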

Wednesday, February 11, 2015

About Black Bars

There are some common problems associated with aspect ratio. For example, a video may look stretched horizontally or vertically when you play it on your DVD player. Fortunately, this is easy to handle by tweaking hardware settings: configure the DVD player or TV set to the correct aspect ratio and the video displays normally.

There is another aspect-ratio problem people run into from time to time during playback: how do you remove the black bars from the edges of the video?

Black bars typically appear when a widescreen (16:9) video is converted to 4:3 using the letterbox resize method. The proportions of the image itself remain correct, but black bars sit at the top and bottom of the frame. If you then watch that video on a widescreen TV, the set adds its own bars to the left and right to fit the 4:3 frame on the screen, leaving the picture boxed in on all sides, which is distracting and uncomfortable to watch.

Conversely, black bars appear on the left and right sides of the image when a 4:3 video is converted to 16:9 with the same letterbox method. In this case you can use the crop resize method to cut away the unwanted bars and return the video to its 4:3 aspect ratio, or use dedicated software designed for the job. Either way, the video then displays normally and can be enjoyed without the distraction of black bars.
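The crop itself is simple arithmetic. A minimal sketch, assuming a 16:9 frame holding a centered 4:3 picture between the side bars:

```python
def crop_to_aspect(width: int, height: int, target_w: int = 4, target_h: int = 3):
    """Return the (left, top, right, bottom) box of the centered target-aspect area."""
    new_width = height * target_w // target_h        # widest 4:3 area that fits
    left = (width - new_width) // 2                  # half the bars sit on each side
    return (left, 0, left + new_width, height)

# A 1920x1080 (16:9) frame with pillarbox bars around a 4:3 picture:
print(crop_to_aspect(1920, 1080))    # (240, 0, 1680, 1080) -> a 1440x1080 (4:3) crop
```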

Tuesday, February 10, 2015

Hardware Compression vs. Software Compression

Software compression is better known than hardware compression, simply because most people never need hardware compression in daily life; software compression meets their requirements, and it is the cheaper and more accessible solution. Hardware compression demands specialized equipment designed for a specific workload. Although it costs more, it has clear advantages over software compression. First, the dedicated hardware makes it much faster, whereas software compression must run on a general-purpose processor. Second, hardware compression places no extra burden on the host processor, because all of its calculations happen inside its own circuitry. Software compression cannot make that claim: under heavy use it is likely to degrade the host's performance, which matters if you are compressing a large amount of data while using the computer for other work at the same time.
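A minimal sketch of the point using Python's zlib module: the compression runs entirely on the general-purpose CPU, which is exactly why a big job competes with everything else the host is doing:

```python
import zlib

# Highly repetitive sample data compresses extremely well; real data varies.
data = b"Software compression uses the host processor. " * 2000

compressed = zlib.compress(data, level=9)     # level 9: best ratio, most CPU work
print(len(data), "->", len(compressed))       # original vs compressed size in bytes

assert zlib.decompress(compressed) == data    # lossless: the original comes back
```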

Software compression has its own advantages. First, it costs less. Second, it gives users many options to choose from: you control how the data is archived, compressed, and formatted. With hardware compression, by contrast, you are given few options or none at all. You have no say in how the data is compressed before being written to the media; everything is pre-programmed into the hardware by the manufacturer.

In conclusion, software compression is the better choice if you plan to store compressed data for a long time. Hardware compression is usually device-specific, which can cause serious problems if the device fails and nothing is available to replace it.

Monday, February 9, 2015

Hardware Compression

Hardware compression is specified at the data path level, and it is available only for data paths that direct data to tape libraries. In this arrangement, uncompressed data is sent from the client computer across the data path to the media, and the tape drive hardware compresses it before writing it to the media.

Hardware compression is faster than software compression most of the time, because it is performed by dedicated circuitry. This makes it ideal for direct-connect configurations in particular, where the subclient and MediaAgent are hosted on the same physical computer: with no network bottlenecks throttling transmission to the media drives, the drives can compress data at the same rate the subclient sends it. Hardware compression boosts not only the virtual capacity of the tape but also data protection performance, since the tape stores more data per unit at a higher operating speed.

The drawback is that hardware compression is not supported for disk libraries; it applies only to tape libraries.


If the data secured by data protection operations must compete with other traffic for network bandwidth, hardware compression may not help much. On a congested network the tape drives are starved for data, because it cannot be supplied quickly enough. The drives can still compress, but they are likely to stop and restart the media while waiting for more data, so compression performance suffers and other problems can follow.

Sunday, February 8, 2015

Software Compression


Client Compression

Client compression is specified at the subclient level for most agents and is available for all storage media. With this option, data is compressed on the client computer in software, and the compressed data is then sent to the MediaAgent, which directs it to the storage media. When the client and MediaAgent reside on separate computers, so that the client must send its data across a network, client compression is especially useful because it greatly reduces the network load.

Replication Compression

Replicated data can be compressed between the source and the destination computer. When compression is enabled, data is compressed on the source computer, replicated across the network to the destination computer, and uncompressed there, which considerably reduces the network load. Replication compression is specified at the Replication Set level and applies to all of its Replication Pairs, so you can enable or disable compression between the source and destination machines for a given Replication Set.

MediaAgent Compression

MediaAgent compression is specified at the subclient level for most clients. If the data path does not have hardware compression enabled, you can enable or disable MediaAgent compression for a given subclient or instance as appropriate.

MediaAgent compression is available for all storage media. Data is compressed on the MediaAgent by compression software running there, and the compressed data is then sent from the MediaAgent to the storage media. This option is especially useful when the MediaAgent runs on a more powerful computer than the client.

Saturday, February 7, 2015

Data Compression

Data compression options are provided for data secured by data protection operations. Compression reduces the quantity of data sent to storage, which can roughly double the effective capacity of the media, depending on the nature of the data. The system automatically decompresses the data and restores it to its original state when it is later restored or recovered.

Two data compression options are provided: software compression, which can compress data on the client or the MediaAgent, and hardware compression, for libraries with tape media, applied at the individual data path. Because compressed data usually grows in size if it is compressed again, the system applies only one type of compression to a given data protection operation. You can redefine the compression type at any time without harming the ability to restore or recover the data.

If hardware compression is available and enabled, it takes priority over the other compression selections. Whenever hardware compression is enabled for a data path, all data sent through that path is automatically compressed in hardware. Otherwise, the data is handled according to the software compression selection of each subclient that backs up to the data path, where the choices are client compression, MediaAgent compression, or no compression.


Last but not least, bear in mind that hardware compression is not applicable to disk libraries; for data paths associated with disk libraries, the subclient's software compression selection is used instead. It is worth understanding these rules before compressing data.

Friday, February 6, 2015

Different Effects for Different Parts of an Image

Brightness, contrast, saturation, and sharpness appear to be the four simplest image controls, and on the surface they are mutually exclusive. In fact they are intertwined: changing any one of them produces surprisingly complicated effects in the image through the other three. Only when users understand how these four controls relate to each other, and how to use them in harmony, can they achieve the desired result. It is wise to think carefully about what you actually want to accomplish before raising or lowering brightness, contrast, saturation, or sharpness.


The overall effect of brightness, contrast, saturation, and sharpness also varies with the content of the photo. Take increasing contrast as an example. Normally, raising contrast makes shadows darker and highlights brighter. But if most of the detail in the photo is already very bright, an overexposed sunset for instance, increasing the contrast setting actually leaves you with less contrast. There are no shadows to push down, so pushing shadows and highlights apart in an image containing only highlights merely compresses the highlights into the top of the range, and the image becomes less contrasty. The conclusion is clear: it is vital to understand how these four simplest controls affect one another and how they work together. Using brightness, contrast, saturation, and sharpness to achieve a balance is a bit of an art.
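A small numeric sketch makes the point, using a hypothetical linear contrast adjustment around the 8-bit midpoint:

```python
bright_pixels = [200, 220, 240, 250]          # an "overexposed" region: highlights only

def add_contrast(v, k=1.5, mid=128):
    """Push values away from the midpoint, clipping to the 0-255 range."""
    return max(0, min(255, round((v - mid) * k + mid)))

stretched = [add_contrast(v) for v in bright_pixels]
print(stretched)                               # [236, 255, 255, 255]: clipped at the top
print(max(bright_pixels) - min(bright_pixels)) # original spread: 50
print(max(stretched) - min(stretched))         # spread after "more contrast": 19
```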

Thursday, February 5, 2015

Sharpness

Brightness, contrast, saturation, and sharpness are thought of as the four simplest controls; they have been around since color TV was first invented. People often turn a blind eye to the fact that all four are related: changing any one of them influences and changes the other three.

People tend to define sharpness as edge contrast, that is, the contrast along edges in a photo, and the definition is reasonable. Increasing sharpness increases contrast only along and near edges in the photo, while the smooth areas of the image are left untouched.


When you apply an unsharp mask, you change the sharpness only at the edges, and different parts of the same image change by different amounts. Where edges are thick, sharpness increases while contrast and brightness barely change. Where edges are thin, contrast, brightness, and saturation all rise noticeably along with the sharpness. It is therefore fair to say that increasing sharpness can create the appearance of increased saturation, contrast, and brightness in areas of fine detail, while areas of broader detail seem unaffected apart from the added sharpness. Once again, changing one of these four controls usually affects the other three. Keep in mind that you need to strike a balance between them; otherwise the image will be uncomfortable to look at.
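A minimal unsharp-mask sketch with Pillow; the file name and parameter values are placeholders:

```python
from PIL import Image, ImageFilter

img = Image.open("photo.jpg")
# radius: size of the edge region; percent: strength; threshold: minimum
# brightness difference that counts as an edge (protects smooth areas).
sharpened = img.filter(ImageFilter.UnsharpMask(radius=2, percent=150, threshold=3))
sharpened.save("photo_sharpened.jpg")
```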

Wednesday, February 4, 2015

Saturation


Saturation behaves much like contrast, except that it increases the separation between colors rather than between shadows and highlights. If some parts of an image are saturated and others have no saturation at all, an increase in saturation also raises the apparent contrast, brightness, and sharpness of the saturated parts, while the unsaturated parts do not change at all. Likewise, a saturation change has a more noticeable effect on vibrant colors and less on dull or nearly neutral ones, simply because there must be some color saturation present for the control to work with in the first place.
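A minimal Pillow sketch of a saturation adjustment; "photo.jpg" is a placeholder:

```python
from PIL import Image, ImageEnhance

img = Image.open("photo.jpg")
# Factor 0.0 gives grayscale, 1.0 leaves the image unchanged, and values
# above 1.0 boost saturation, acting most strongly on already-vivid colors.
vivid = ImageEnhance.Color(img).enhance(1.4)
vivid.save("photo_saturated.jpg")
```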


Again, changing one of these four controls usually affects the other three. Keep in mind that you need to strike a balance between them; otherwise the image will be uncomfortable to look at.

Tuesday, February 3, 2015

Contrast

Brightness, contrast, saturation, and sharpness are thought of as the four simplest controls; they have been around since color TV was first invented. People often turn a blind eye to the fact that all four are related: changing any one of them influences and changes the other three.

An image must have the proper contrast for easy viewing. Contrast is the difference in brightness between objects or regions: a white rabbit running across a snowy field has poor contrast, while a black dog against the same white background has good contrast. More precisely, contrast is the separation between dark and bright, so increasing it makes shadows darker and highlights brighter, while decreasing it brings the shadows up and the highlights down, pulling them closer together. Adding contrast usually makes an image more appealing because it looks more vibrant; an image whose contrast has been reduced too far looks dull and boring.
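A minimal Pillow sketch of a contrast adjustment; "photo.jpg" is a placeholder:

```python
from PIL import Image, ImageEnhance

img = Image.open("photo.jpg")
punchy = ImageEnhance.Contrast(img).enhance(1.3)   # shadows and highlights pushed apart
flat = ImageEnhance.Contrast(img).enhance(0.7)     # shadows up, highlights down
```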


When you add contrast to an image, some parts become darker while others become brighter, which makes the image look more defined. At the same time you have increased the brightness of some regions, and the saturation of the brighter and darker parts rises, which in turn increases apparent sharpness. The four controls clearly affect each other: changing one causes corresponding changes in the other three. Keep in mind that you need to strike a balance; otherwise the image will be uncomfortable to look at.

Monday, February 2, 2015

Brightness

Brightness, contrast, saturation, and sharpness are thought of as the four simplest controls; they have been around since color TV was first invented. People often turn a blind eye to the fact that all four are related: changing any one of them influences and changes the other three. Do you know how they are related, and how the balance of brightness, contrast, saturation, and sharpness shifts when you change just one of them? Here is some background you may want to know.

An image must have the proper brightness and contrast for easy viewing. Brightness is the overall lightness or darkness of the image. Most people assume brightness is conceptually the simplest control: change it and the image simply gets lighter or darker. To clear up a common misunderstanding, brightness must first be distinguished from "gamma". Increasing gamma by moving the mid-tone slider on a histogram is not the same as increasing brightness. Raising gamma does make an image look brighter, but the change is non-linear: it lifts only the shadows and mid-tones, leaving the highlights largely untouched. Traditional brightness, by contrast, lifts the entire image equally, from the shadows to the highlights.
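A small sketch, assuming 8-bit pixel values, makes the difference visible:

```python
def brighten(v, amount=40):
    # Traditional brightness: every pixel is lifted by the same amount.
    return min(255, v + amount)

def gamma(v, g=1.8):
    # Gamma: a non-linear curve that lifts shadows and mid-tones most,
    # while values near 255 barely move.
    return round(255 * (v / 255) ** (1 / g))

for v in (10, 128, 245):                 # a shadow, a mid-tone, a highlight
    print(v, brighten(v), gamma(v))
# 10  -> 50 (brightness) vs 42  (gamma): both lift the shadow
# 128 -> 168             vs 174: similar effect in the mid-tones
# 245 -> 255 (clipped)   vs 249: gamma barely touches the highlight
```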


If we add too much brightness, the shadows catch up with the highlights, which are already as bright as they can get, and the contrast, saturation, and apparent sharpness of the image all drop accordingly. Once again, changing one of these four controls affects the other three. Keep in mind that you need to strike a balance; otherwise the image will be uncomfortable to look at.

Sunday, February 1, 2015

The Difference between Interlaced and Progressive Video (Ⅱ)

Progressive video, by contrast, is made up of consecutively displayed frames, each of which contains all of the horizontal lines that make up the image. This gives progressive video several advantages over interlaced video: images are smoother, fast-motion sequences are sharper, and artifacts are much less prevalent.

Progressive video has drawbacks as well, chiefly its higher bandwidth demand, which can cause problems when that demand cannot be met. Advances in technology have largely removed this obstacle. Television systems and packaged media such as DVD are moving away from analog transmission and storage toward digital variants, which usually allow far more efficient video compression. As a result, for the same amount of bandwidth, higher-resolution images can be achieved than were possible with interlaced analog video.

Even so, interlaced video will remain in use for a long time despite the convenience and popularity of progressive video, because broadcasts in the US and some other countries still use the 1080-line interlaced HD format. Meanwhile, both displays and packaged media are moving toward exclusively progressive formats, such as 720- and 1080-line progressive video.


All digital, non-CRT displays can show only progressive video, so interlaced video signals must be deinterlaced or otherwise converted to progressive video before such a display can present them. Over time, progressive video may well replace interlaced video entirely across every type of device, given how much more convenient and beneficial it is.
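The simplest form of deinterlacing is line doubling ("bob"). A minimal sketch, not a production deinterlacer, assuming each field arrives as a NumPy array holding half the frame's lines:

```python
import numpy as np

# Each interlaced field holds only every other horizontal line, so a full
# progressive frame can be approximated by repeating each field line.
field = np.arange(12, dtype=np.uint8).reshape(3, 4)   # stand-in for one video field
frame = np.repeat(field, 2, axis=0)                   # duplicate every line
print(frame.shape)                                    # (6, 4): full-height progressive frame
```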