#23
Romulan Interpreter
"name field"
Jun 2011
Thailand
2833₁₆ Posts
Quote:
The connections can make a difference from many other points of view: how much power they need, how thick/thin/flexible (and easy to handle and move around) the monitor cable is, how "safe" it is for the people who use it (high-voltage cables, or those emitting dangerous frequencies, are usually frowned upon), and how well protected it is against outside interference and eavesdropping (imagine something like this, but targeting the signals in the cable instead of the light reflected by your face; I have a link somewhere but can't find it now, I remember some guys being able to read what's on your monitor from 250 meters away just by interpreting the signals radiated by your VGA cable, which is nothing more than an antenna), etc.

For reference, you can check the datasheet of any LCD which exposes a pure digital (TTL, CMOS, CPU, DE) interface (direct access to the LCD drivers, not hidden behind an LVDS, DP, or other serializing interface). For example, go to this page. I have one of these on my desk right now, used in many PDAs and iPad-like toys; you can see on page 5 that only 18 data lines are used for colors. Your CPU/GPU might send 32 bits for each pixel, but the LCD will only need 18 of them.

Some LCD controllers overcome this by doing TMED (temporally modulated energy distribution). It takes advantage of the fact that LCDs are highly remanent, i.e. when you turn them off, they don't turn off instantly; the color persists for a few milliseconds, fading away slowly. So, if the controller is clever, it can make many colors by calculating the "energy" (kelvin) of the color you want to display (all 32-bit LCD controllers do this, because THERE ARE NO LCD DRIVERS with 32 lines of color; the most professional of them have 8 bits per RGB channel, that is 24 data lines in total, and these data lines may not be accessible from outside if they expose a different interface, but the corresponding bits are there, inside, well hidden in internal latches and registers, hehe) and then turning the pixel on and off in time, in such a way that the "energy" radiated by the pixel stays around the energy that your color requires. It is like "dimming" the pixel by turning it on and off fast: out of the 60 frames per second of your monitor screen, in a few of them the pixel is off, to display an "intermediary shade" (like making Green-126.3, in between Green-126 and Green-127). For more details you can see the TMED algorithms of the PXA320 here. Other professional controllers use color palette optimization, or distribute the energy not in time but in "space", a procedure some of you may know as "dithering": moving the pixels around, with the goal of getting the same "energy" radiation from an "area" (of several nearby pixels) as would be radiated by the same area if the color were the real, requested one.

To transmit this information from your computer to the card, and even to your monitor, they all use serialization now, i.e. transmitting the bits one by one on a single communication channel, instead of transmitting them all at the same time over a lot of channels. This allows reducing the number of pins and the number of wires (lowering the production costs for ICs and making devices easier to handle, for example thinner and more flexible monitor cables), and it is possible because the data can be sent at a frequency higher than the one at which they are needed, so there is enough time to "serialize" them, send them, and "de-serialize" them at the other end of the chain before using them.
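To make the temporal-dithering idea above concrete, here is a minimal C sketch. It assumes a hypothetical panel with 6 bits per color channel driven from 8-bit source values, and recovers the two "lost" bits by alternating between two adjacent panel levels over a 4-frame cycle, so the time-averaged intensity approximates the 8-bit value. It only illustrates the principle; it is not the actual PXA320 TMED algorithm.

```c
/* Minimal sketch of temporal dithering ("frame rate control"), assuming a
 * hypothetical 6-bit-per-channel panel fed from 8-bit source data.
 * The two extra bits are recovered by alternating between two adjacent
 * 6-bit levels over a 4-frame cycle, so the time-averaged intensity
 * approximates the 8-bit value. Not the actual PXA320 TMED algorithm. */
#include <stdio.h>
#include <stdint.h>

/* Return the 6-bit level to drive on this frame for an 8-bit input. */
static uint8_t dither_6bit(uint8_t value8, unsigned frame)
{
    uint8_t base = value8 >> 2;        /* truncated 6-bit level            */
    uint8_t frac = value8 & 0x3;       /* fractional part, in quarter steps */
    /* In 'frac' out of every 4 frames, bump to the next level up.          */
    uint8_t bump = ((frame & 0x3) < frac) ? 1 : 0;
    uint8_t out  = base + bump;
    return (out > 63) ? 63 : out;      /* clamp at full scale               */
}

int main(void)
{
    uint8_t green = 203;               /* 8-bit source value */
    double sum = 0.0;
    for (unsigned f = 0; f < 4; f++) {
        uint8_t level = dither_6bit(green, f);
        printf("frame %u: drive level %u/63\n", f, level);
        sum += level;
    }
    printf("average over 4 frames: %.2f (target %.2f)\n",
           sum / 4.0, green / 4.0);
    return 0;
}
```

For an 8-bit value of 203, this drives level 51 on three frames and 50 on the fourth, averaging 50.75, which is exactly 203/4.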
As a parenthesis, some of the older guys here may remember the old SCSI-like parallel cables, with 60, 80, or even 180 wires, used for high-speed parallel interfaces. Years ago, when everybody went crazy switching to "serial" (like Serial ATA, LVDS displays, USB printers instead of the 25-pin parallel matrix stuff, etc.), people could not understand how a serial interface, where you send one bit at a time, can be faster than a parallel interface, where you send 8, 16, or even a few hundred bits at a time. Even some of my colleagues, who WERE educated in the domain, scratched their heads about it. The reality is that, at higher frequencies, strange things start to happen between parallel wires. One can't easily run a hundred meters of IDE cable around a production hall, not to mention properly terminating such cables to equalize the impedance of the wires, or the material costs of such a stupid idea. The same goes inside the box: the connections are shorter, but the frequencies are higher. At the time when people loved parallel interfaces, these things were all well known, but the technology to produce integrated circuits fast enough to allow serialization was not well developed.

Practically, if you have an 8-bit parallel interface (that is 8 wires of data, plus 3 of control, to signal when you want to read or write, when you want to turn the interface on or off, etc.) and you want to serialize it, then you will need some hardware able to send the bits one by one, at least 11 times faster, to get the same speed in the end. The result is that you can now use only two wires instead of 11, and your IC can have only 3 pins instead of 12. It consumes less power, uses less material for cables, the cables are cheaper and easier to handle, and, possibly, functionally better. That is what LVDS (and the newer HDMI or DP standards) is doing. In theory, a serial interface is not, and cannot be, faster than a parallel one, provided you could insulate the parallel wires from each other's interference, produce them for free, make them infinitely thin so that the size of 1000 wires put together wouldn't matter, etc. But at those scales, strange things start to happen.

Do you want larger and faster memories? There was never a problem integrating a lot of memory cells on a silicon chip. The real problem is that for every new row and new column of memory cells that you put in, you need to add address pins. It makes no sense to have more memory cells if you can't access them. The memory cells you can make small, but the pins you can't, otherwise you can't solder them. The best COB machine can bond micron wires to 20x20 micron pads, but for 1 GB of memory, assuming you make a "block" of 1000 planes of 1000 rows of 1000 columns each, you would still need 3000 pins on that IC. Well, it would be a very fast memory, but also very big... If you place the pins around the edge, like on a DIP/QFP package, the memory would be a small cubic millimeter in the middle, but the package would be about the size of an A4 sheet of paper, to have space around it for all the pins. Even a BGA package with 0.5 mm pitch and 10 layers would need to be a square with sides of about 10 cm. And don't forget that the CPU which interfaces with it would need 3000 pins for it too. That is why they are "time-multiplexed": you use only 1000 pins to access the "cube of memory", but you send the coordinates (x, y, z) of the cell in the cube in 3 consecutive moments of time. Now you need 3 times longer to access it, but the box is much smaller.
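As a rough illustration of the time-multiplexing idea (not of any particular memory part), here is a C sketch that sends a binary-encoded 30-bit cell address, enough for about 10^9 cells, over a shared 10-bit bus in three consecutive strobes, much like the row/column (RAS/CAS) multiplexing real DRAMs use. The bus width and the plane/row/column split are made up for the example.

```c
/* Sketch of time-multiplexing an address bus, in the spirit of the post:
 * a 30-bit cell address sent over a 10-bit bus in three consecutive
 * strobes instead of over 30 dedicated pins. The bus width and field
 * split are illustrative, not taken from any particular memory part. */
#include <stdio.h>
#include <stdint.h>

#define BUS_BITS 10u
#define BUS_MASK ((1u << BUS_BITS) - 1u)

/* Drive the (hypothetical) shared address bus three times: plane, row, column. */
static void present_address(uint32_t addr30, uint16_t bus[3])
{
    bus[0] = (addr30 >> (2 * BUS_BITS)) & BUS_MASK;  /* strobe 1: plane  */
    bus[1] = (addr30 >> BUS_BITS) & BUS_MASK;        /* strobe 2: row    */
    bus[2] = addr30 & BUS_MASK;                      /* strobe 3: column */
}

int main(void)
{
    uint32_t addr = 0x12345678u & ((1u << 30) - 1u); /* some 30-bit address */
    uint16_t bus[3];
    present_address(addr, bus);
    printf("address 0x%08X -> strobes 0x%03X 0x%03X 0x%03X\n",
           addr, (unsigned)bus[0], (unsigned)bus[1], (unsigned)bus[2]);

    /* Reconstruct on the memory side to show no information is lost. */
    uint32_t rebuilt = ((uint32_t)bus[0] << 20) | ((uint32_t)bus[1] << 10) | bus[2];
    printf("reconstructed: 0x%08X (%s)\n", rebuilt,
           rebuilt == addr ? "match" : "MISMATCH");
    return 0;
}
```

The trade-off is exactly the one described above: one third of the pins, three bus cycles instead of one to present a full address.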
Well, 1000 pins is still a lot, so let's cut out all the pins and put 1000 more cells inside, which will hold the "status" of these fictive pins. Then we use two pins and serial communication to load those 1000 cells in the same way as we would have accessed the 1000 pins. Now our memory is called "serial memory": it is tiny, it can be made arbitrarily huge, and it only has two pins to connect to it; add the capacity to retain data after the power is cut off, and you have the best example of the most marvelous and cheapest memory. But even if we increase the serial clock 20-fold, the new memory is still 3000/20 = 150 times slower. My lunch break is over, sorry.
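For the two-pin serial access described above, here is a bit-banged C sketch. The set_clk()/set_dat() GPIO helpers are hypothetical stubs so the example compiles standalone, and the 0x03 "read" opcode with a 24-bit address only loosely resembles a typical SPI flash command; the point is that every byte now costs 8 clock cycles on a single data line, which is where the speed penalty relative to a wide parallel bus comes from.

```c
/* Bit-banged serial access sketch, to illustrate the "two pins instead of
 * thousands" trade-off. set_clk()/set_dat() are hypothetical GPIO helpers,
 * stubbed out here so the example compiles standalone; on real hardware
 * they would toggle the two physical pins of a serial memory. */
#include <stdio.h>
#include <stdint.h>

static void set_clk(int level) { (void)level; /* drive the clock pin */ }
static void set_dat(int level) { (void)level; /* drive the data pin  */ }

/* Shift one byte out MSB-first: each bit costs one full clock cycle. */
static void shift_out(uint8_t byte)
{
    for (int bit = 7; bit >= 0; bit--) {
        set_clk(0);
        set_dat((byte >> bit) & 1);
        set_clk(1);                    /* memory samples data on this edge */
    }
}

int main(void)
{
    /* Send a command byte plus a 24-bit address, one bit at a time. */
    uint32_t addr = 0x0001A2B3u;
    shift_out(0x03);                   /* hypothetical "read" opcode */
    shift_out((addr >> 16) & 0xFF);
    shift_out((addr >> 8) & 0xFF);
    shift_out(addr & 0xFF);
    printf("sent 4 bytes = 32 clock cycles just to set up one read\n");
    return 0;
}
```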
Last fiddled with by LaurV on 2013-03-14 at 07:25 Reason: many typos (and still there are!)
#24
May 2011
Orange Park, FL
928₁₀ Posts
Thanks for the memories — this brought to mind when the UNISYS engineers took out the word channel disk interfaces (with their huge multi-pin connectors) and replaced them with serial interfaces in the mainframe twenty years ago.
It seemed to me at the time that the parallel interface HAD to be faster...
#25
Aug 2002
2·3²·13·37 Posts
That was an awesome post.
Seriously, we learned a lot!
#26
Romulan Interpreter
"name field"
Jun 2011
Thailand
10291₁₀ Posts
Quote: