Types and characteristics of RAM. SDRAM and DDR SDRAM: what do we have?

Not long ago, the mysterious words SDRAM and BEDO RAM appeared in various computer tables. It is easy to guess that they refer to random access memory (RAM). The appearance of something new in this area is interesting in itself, but looking at the access time quoted next to it (10 ns instead of 50-70 ns for EDO), you wonder what kind of monster has crawled out of the underground laboratories of the chip developers. The long 168-pin memory connectors on new motherboards are causing even greater confusion and hesitation among users. I will try to bring some clarity to this issue.

So, behind the abbreviation SDRAM lies synchronous dynamic RAM. Some conclusions about its principle of operation can be drawn from the name alone. Firstly, this memory is dynamic, that is, it requires periodic refreshing of its data and is generally based on the same technology as today's EDO and FPM memory (see "KV" No. 38). Secondly, this memory is synchronous, which means it is synchronized by some signal. The most logical candidate for this role is the bus clock, and it is indeed to the bus clock that SDRAM is synchronized. A legitimate question arises: why is this necessary?

The fact is that at a bus frequency of 66 MHz, access to EDO memory proceeds as follows: the first byte is read in 5 clock cycles (at best), and subsequent bytes in the same row in 2 clock cycles each (also at best). Thus, the time to read four bytes can be written as 5-2-2-2 (to fill a cache line, all four must be read). Obviously, the processor has to sit idle for some of that time. If the reading process is synchronized with the bus clock, you get 5-1-1-1, which means less idle time. This is the main idea behind synchronous memory. Let's take a closer look at the process.
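
To make the 5-2-2-2 versus 5-1-1-1 comparison concrete, here is a minimal back-of-the-envelope sketch (my own illustration, not from the original article) that totals the bus cycles needed for a four-transfer cache-line fill on a 66 MHz bus:

```python
# Compare the time to fill one cache line with EDO (5-2-2-2) versus
# SDRAM (5-1-1-1) timing on a 66 MHz memory bus.

BUS_MHZ = 66
CYCLE_NS = 1000 / BUS_MHZ          # one bus clock cycle, ~15 ns

def line_fill_ns(timing):
    """Total time for a 4-transfer cache-line fill, given per-transfer cycle counts."""
    return sum(timing) * CYCLE_NS

edo   = (5, 2, 2, 2)               # EDO DRAM: 11 cycles total
sdram = (5, 1, 1, 1)               # SDRAM:     8 cycles total

print(f"EDO:   {line_fill_ns(edo):.0f} ns")    # ~167 ns
print(f"SDRAM: {line_fill_ns(sdram):.0f} ns")  # ~121 ns
```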

First, as always, the processor places the required address on the address bus. The address is then split into a row address and a column address, which are fed in turn to the matrix of storage elements. Row and column addresses are supplied to the same chip pins, so they cannot be transmitted simultaneously (for the same reason, accessing the first byte takes longer). Then the differences begin: in conventional memory, addresses and control signals are read asynchronously, so for reliable reading each signal must be held for a certain time. In SDRAM, all signals are latched on the rising edge of the clock pulse, so the problem of timing coordination is much easier to solve.

Furthermore, with conventional memory an external signal had to be applied for each byte read (which means a read could take several clock cycles). The SDRAM chip has a special register (the Mode Register) that is used to set the read (or write) mode. After the first byte is accessed, the signals for reading subsequent values are generated by the chip itself on every clock cycle. In this way, it can be programmed to read one, two, four or eight bytes, or an entire matrix row. In this burst mode of operation, a delay of several clock cycles occurs only between the memory access and the first byte of data, while subsequent bytes arrive on the data bus every clock cycle (see Fig. 1).

Fig.1
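
To illustrate the burst idea described above, the following sketch (a deliberate simplification of mine, not vendor code) mimics how a chip with a programmed burst length generates the remaining column addresses internally after the first access:

```python
# Simplified model of an SDRAM burst read: the controller supplies only the
# starting column; the chip itself produces the following addresses, one per
# clock cycle, for the programmed burst length.

def burst_read(memory, row, start_col, burst_length):
    """Return burst_length consecutive values from an open row (sequential burst)."""
    return [memory[row][start_col + i] for i in range(burst_length)]

# Hypothetical two-row "array", purely for demonstration.
memory = {0: list(range(100, 116)), 1: list(range(200, 216))}

# Mode Register programmed for a burst length of 4:
print(burst_read(memory, row=0, start_col=4, burst_length=4))
# -> [104, 105, 106, 107], delivered one value per clock after the initial latency
```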

Another feature of SDRAM is that one module (DIMM) can contain several (two or four) memory banks. This makes it possible to keep several rows active simultaneously and access them in turn. As a result, a continuous flow of data can be obtained, since while one bank is being prepared for operation, data can be read from another (this technique is called interleaving).

To put all this in simple terms, this memory works synchronously with the bus, and the bus frequency does not have to be 66 MHz. In fact, the 10 ns access time quoted as an SDRAM parameter is the minimum time between read cycles of consecutive bytes (that is, it characterizes the maximum bus frequency with which such memory can be synchronized). 10 nanoseconds corresponds to a bus frequency of 100 MHz. Considering the rapid growth of processor speeds, such a frequency is not far off (some motherboards already support clock frequencies of 75 and 83 MHz).
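
The relationship between the quoted access time and the maximum clock is simple arithmetic; a one-line check (my own illustration):

```python
# 10 ns between consecutive transfers corresponds to a 100 MHz clock.
access_time_ns = 10
max_clock_mhz = 1000 / access_time_ns
print(max_clock_mhz)   # 100.0
```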

It is advisable to use SDRAM in multitasking, multi-user systems, since its performance approaches that of the second-level cache (5-1-1-1 versus 2-1-1-1 clock cycles on a 66 MHz bus, respectively).

SDRAM is available in the form of 168-pin DIMMs (dual in-line memory modules). Note that the presence of such a connector on a system board does not by itself mean that the board supports SDRAM: EDO memory is also available as 168-pin DIMMs. SDRAM is definitely supported by Intel's VX and TX chipsets; I don't know about the others.

However, SDRAM did not immediately become a standard. Early work on synchronous dynamic memory was carried out by many companies, but since Intel stubbornly stuck to bus frequencies no higher than 66 MHz (83 MHz is still usually not documented on motherboards, and 75 MHz is officially used only by the Cyrix 200+ processor), memory designed for frequencies up to 100 MHz seemed unprofitable to them. Therefore, the ideas behind SDRAM (internal address generation and burst operation) were applied to conventional EDO memory.

The result is BEDO DRAM (Burst EDO DRAM). It uses an internal counter to automatically generate the addresses for reading subsequent bytes. Reading is carried out in fixed-size bursts of 4 bytes. BEDO DRAM is optimized for a 66 MHz bus and achieves 5-1-1-1 read timing. The corresponding timing diagram is shown in Fig. 2.

The most common type of RAM today is SDRAM. Translated literally, the name means synchronous dynamic random access memory.

Without going into technical details, the distinguishing feature of this type of RAM is that when a signal arrives at the memory, the response does not come immediately, but only on the next clock signal. Another feature worth highlighting is pipelined command processing: the next command begins to be processed without waiting for the previous one to complete.

Start of sales

The SDRAM era actually dates back to 1993, when SDRAM began to be mass-produced. In those days another type of RAM, VRAM, was in use, but it was quite expensive for the average user. The new RAM was called SDR SDRAM and fit the form factor (in simple terms, the connector on the motherboard) of existing DRAM memory modules.

Modules of 64 megabytes were widely produced, with clock frequencies of 66-133 MHz. They are still found here and there, but they are a rarity.

DDR SDRAM

But progress did not stand still, and after some time a new RAM standard appeared, called DDR SDRAM, in which, thanks to technical tricks, the operating speed was doubled while keeping the same clock frequency.

Among the innovations was a synchronizing signal between modules (used when more than one module is installed). If several modules are used, one of them is located farther from the memory controller than the other, so signals from the RAM modules reach it with different delays (to a person this difference seems negligible, but to a computer it is significant). The synchronizing signal eliminates this issue.

DDR RAM was produced with clock frequencies of up to 350 MHz. The module required a supply voltage of 2.6 V. In terms of capacity, modules of 256 and 512 MB were produced.

At the moment, DDR SDRAM is used in few places.

DDR 2 RAM

In 2003, DDR2 (full name DDR2 SDRAM) appeared. Its main advantage over its predecessor is the increased bus clock frequency. The module design was also improved for better cooling of its electronic components. But besides the advantages there is a drawback: the overall latencies when processing commands are higher than with DDR.

DDR2 modules made widespread use of so-called ECC memory. One extra memory chip per module is allocated for automatic detection and correction of spontaneously occurring memory errors (arising, for example, from electromagnetic interference generated by the computer itself, or from exposure to cosmic radiation).

DDR2 was released at clock frequencies of up to 600 MHz. The module required a supply voltage of 1.8 V, with a power consumption of 247 mW. In terms of capacity, modules of 512 to 4096 MB were produced.

In terms of breadth of use, this is the most common type of RAM in the CIS, although worldwide DDR2 is steadily being replaced by newer types.

DDR 3 RAM

In 2010, a new type of RAM, DDR3, was released: an even higher operating frequency and even larger memory chip capacities.

DDR3 is produced at bus frequencies of up to 1200 MHz. The module requires a supply voltage of only 1.5 V. As for maximum capacity, modules began to be produced with a previously unheard-of 16 GB of RAM.

Most computers sold nowadays use DDR3 RAM.

DDR 4

In 2014, a new type of RAM, DDR4, was released, created as an improved version of DDR3. The operating frequency of some samples reaches 3333 MHz, with module capacities from 4 to 128 GB.

It is still very rare to see DDR4 anywhere in the CIS. But as practice shows, it is just a matter of time.

DDR SDRAM is used in computing as RAM and as video memory. It replaced SDR SDRAM.

DDR SDRAM achieves twice the operating speed of SDRAM by reading commands and data not only on the rising edge, as in SDRAM, but also on the falling edge of the clock signal. This doubles the data transfer rate without increasing the memory bus clock frequency: when DDR operates at 100 MHz, we get an effective frequency of 200 MHz (when compared with its SDR SDRAM analogue). The JEDEC specification notes that it is incorrect to use the term "MHz" for DDR; the correct measure is "millions of transfers per second per data pin."

A special operating mode of the memory modules is dual-channel mode.

Description

DDR SDRAM memory chips were produced in TSOP and (later) BGA (FBGA) packages, manufactured on 0.13 and 0.09 micron process technology:

  • IC supply voltage: 2.6 V ± 0.1 V.
  • Power consumption: 527 mW.
  • I/O interface: SSTL_2.

The memory bus is 64 bits wide, that is, 8 bytes are transferred along the bus per transfer. This gives the following formula for the maximum transfer rate of a given memory type: (memory bus clock speed) x 2 (transfers per clock cycle) x 8 (bytes transferred per transfer). To ensure two data transfers per clock cycle, a special "2n prefetch" architecture is used: the internal data bus is twice as wide as the external one, and when transmitting data, the first half of the width is sent on the rising edge of the clock signal and the second half on the falling edge.
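
A short calculation following this formula (a sketch of mine; the module names in the comments are the standard designations used later in the table):

```python
# Peak theoretical DDR SDRAM throughput:
# bus clock (MHz) x 2 transfers per clock x 8 bytes per transfer.

def ddr_peak_mb_s(bus_clock_mhz, channels=1):
    return bus_clock_mhz * 2 * 8 * channels

print(ddr_peak_mb_s(100))              # DDR200 / PC1600 -> 1600 MB/s
print(ddr_peak_mb_s(200))              # DDR400 / PC3200 -> 3200 MB/s
print(ddr_peak_mb_s(200, channels=2))  # DDR400, dual channel -> 6400 MB/s
```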

In addition to double data transfer, DDR SDRAM has several other fundamental differences from simple SDRAM, mostly technological. For example, a DQS (data strobe) signal was added, routed on the printed circuit board along with the data lines and used for synchronization during data transfer. If two memory modules are used, the data from them arrives at the memory controller with a slight time difference because of the different trace lengths, which creates the problem of choosing a clock edge on which to latch it; the use of DQS solves this.

JEDEC sets standards for DDR SDRAM speeds, divided into two parts: the first for memory chips, and the second for memory modules, which, in fact, house the memory chips.

Memory chips

Each DDR SDRAM module contains several identical DDR SDRAM chips. For modules without error correction (ECC) their number is a multiple of 4, for modules with ECC the formula is 4+1.

Memory chip specification

  • DDR200: DDR SDRAM type memory operating at 100 MHz
  • DDR266: DDR SDRAM type memory operating at 133 MHz
  • DDR333: DDR SDRAM type memory operating at 166 MHz
  • DDR400: DDR SDRAM type memory operating at 200 MHz

Chip characteristics

  • Chip capacity (DRAM density). Recorded in megabits; for example, 256 Mbit is a chip with a capacity of 32 megabytes.
  • Organization (DRAM organization). Written as 64M x 4, where 64M is the number of elementary storage cells (64 million) and x4 (pronounced "by four") is the chip's data width, that is, the width of each cell. DDR chips come in x4 and x8; the latter are cheaper per megabyte of capacity, but do not allow the use of the Chipkill, memory scrubbing and Intel SDDC (single-device data correction) functions.
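
For example, the organization string directly determines the chip capacity; a small sketch of the arithmetic (mine, not part of any standard):

```python
# "64M x 4" means 64 million cells, each 4 bits wide -> 256 Mbit -> 32 MB.

def chip_capacity_mbit(cells_millions, width_bits):
    return cells_millions * width_bits

print(chip_capacity_mbit(64, 4))   # 256 Mbit (32 MB)
print(chip_capacity_mbit(32, 8))   # 256 Mbit (32 MB): same capacity, x8 organization
```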

Memory modules

DDR SDRAM modules are made in the DIMM form factor. Each module contains several identical memory chips and a serial presence detect (SPD) configuration chip. Registered memory modules also contain register chips that buffer and amplify the signals on the bus; unbuffered (unregistered) modules do not have them.

Module characteristics

  • Capacity. Specified in megabytes or gigabytes.
  • Number of chips (# of DRAM devices). A multiple of 8 for modules without ECC, a multiple of 9 for modules with ECC. The chips may be located on one or both sides of the module. The maximum number that fits on a DIMM is 36 (9x4).
  • Number of ranks (# of DRAM rows (ranks)).

As can be seen from their characteristics, the chips have a 4- or 8-bit data bus. To provide a wider bus (a DIMM requires 64 bits, or 72 bits for ECC memory), chips are combined into ranks. A rank has a common address bus, and its chips' data lines together make up the full width. Several ranks can fit on one module. If more memory is needed, further ranks can be added by installing several modules on one board, following the same principle: all ranks sit on the same bus and differ only in their Chip Select signal, each rank having its own. A large number of ranks electrically loads the bus (more precisely, the controller and the memory chips) and slows down their operation. Hence the move to multi-channel architectures, which also allow independent access to several modules.

  • Delays (timings): CAS Latency (CL), Clock Cycle Time (tCK), Row Cycle Time (tRC), Refresh Row Cycle Time (tRFC), Row Active Time (tRAS).

The characteristics of the modules and the chips they consist of are related.

The capacity of the module is equal to the capacity of one chip multiplied by the number of chips. With ECC, this number is further multiplied by a factor of 8/9, since one bit of error-checking redundancy is added per byte. Thus the same module capacity can be made up of a large number (e.g. 36) of small chips or a small number (e.g. 9) of larger chips.

The total data width of the module's chips equals the number of ranks multiplied by 64 (72 with ECC) bits. Thus, increasing the number of chips, or using x8 chips instead of x4, increases the number of ranks on the module.
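
As an illustration of these relationships, here is a sketch (the specific 1 GB layouts are assumptions of mine, non-ECC for simplicity, chosen to parallel the comparison described next):

```python
# How chip count and organization determine module capacity and rank count
# (non-ECC module on a 64-bit bus).

def module_layout(chip_mbit, chip_width_bits, n_chips, bus_bits=64):
    capacity_mb = chip_mbit * n_chips // 8           # Mbit -> MB
    ranks = n_chips * chip_width_bits // bus_bits    # chips per rank = bus width / chip width
    return capacity_mb, ranks

for chip_mbit, width, n in [(256, 4, 32), (512, 8, 16), (512, 4, 16)]:
    mb, ranks = module_layout(chip_mbit, width, n)
    print(f"{n} x {chip_mbit} Mbit x{width}: {mb} MB, {ranks} rank(s)")

# 32 x 256 Mbit x4: 1024 MB, 2 rank(s)
# 16 x 512 Mbit x8: 1024 MB, 2 rank(s)
# 16 x 512 Mbit x4: 1024 MB, 1 rank(s)
```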

In this example, possible layouts of a 1 GB server memory module are compared. Of the options presented, the first or third is preferable, since they use x4 chips, which support the advanced error correction and failure protection features. If single-rank memory is required, only the third option remains, but depending on the current prices of 256 Mbit and 512 Mbit chips it may turn out to be more expensive than the first.

Memory module specification

Module name | Chip type | Memory bus clock, MHz | Max theoretical throughput, single channel, MB/s | Max theoretical throughput, dual channel, MB/s
PC1600* | DDR200 | 100 | 1600 | 3200
PC2100* | DDR266 | 133 | 2133 | 4267
PC2400 | DDR300 | 150 | 2400 | 4800
PC2700* | DDR333 | 166 | 2667 | 5333
PC3200* | DDR400 | 200 | 3200 | 6400
PC3500 | DDR433 | 217 | 3467 | 6933
PC3700 | DDR466 | 233 | 3733 | 7467
PC4000 | DDR500 | 250 | 4000 | 8000
PC4200 | DDR533 | 267 | 4267 | 8533
PC5600 | DDR700 | 350 | 5600 | 11200

Note 1: Standards marked with an "*" are officially certified by JEDEC. The remaining memory types are not JEDEC-certified, although they were produced by many memory manufacturers, and most recently produced motherboards supported them.

Note 2: memory modules operating at higher frequencies (up to 350 MHz, DDR700) were also produced, but they were not in great demand, were made in small volumes, and in addition carried a high price.

Module sizes are also standardized by JEDEC.

It should be noted that there is no architectural difference between DDR SDRAM modules of different speeds, for example between PC1600 (operating at 100 MHz) and PC2100 (operating at 133 MHz). The standard simply states the guaranteed frequency at which a given module operates.

DDR SDRAM memory modules can be distinguished from regular SDR modules by the number of pins (184 instead of 168) and by the single key notch in the connector instead of two.

Unlike other types of DRAM, which used asynchronous data exchange, the response to a control signal received by the device is returned not immediately but only on the next clock signal. The clock signals make it possible to organize SDRAM operation as a finite state machine executing incoming commands. Incoming commands can arrive in a continuous stream, without waiting for previous instructions to finish executing (pipelining): immediately after a write command, the next command can arrive without waiting for the data to be written. A read command causes the data to appear at the outputs after a certain number of clock cycles; this time is called latency (SDRAM latency) and is one of the important characteristics of this type of device.

Refresh cycles are performed on an entire row at once, unlike previous types of DRAM, which refreshed data by an internal counter using the CAS-before-RAS refresh method.

Usage history

Mass production of SDRAM began in 1993. Initially, this type of memory was offered as an alternative to expensive video memory (VRAM), but SDRAM soon gained popularity and began to be used as RAM, gradually replacing other types of dynamic memory. Subsequent DDR technologies made SDRAM even more efficient. The development of DDR SDRAM was followed by the DDR2 SDRAM standard, and then the DDR3 SDRAM standard.

SDR SDRAM

The first SDRAM standard became known, with the advent of its successors, as SDR (Single Data Rate, as opposed to Double Data Rate). One control command was accepted and one data word transferred per clock cycle. Typical clock speeds were 66, 100 and 133 MHz. SDRAM chips were available with data buses of various widths (usually 4, 8 or 16 bits), but these chips were normally part of a 168-pin DIMM module that could read or write 64 bits (without parity) or 72 bits (with parity checking) per clock cycle.

Using the data bus in SDRAM is complicated by a delay of 2 or 3 clock cycles between the read signal and the appearance of data on the bus, whereas for a write there is no such delay. This required the development of a rather complex controller that would not allow the data bus to be used for writing and reading at the same time.

Control signals

Commands controlling the SDR SDRAM memory module are supplied to the module's pins over 7 signal lines. One of them carries the clock signal, whose leading (rising) edges set the moments at which control commands are read from the remaining 6 command lines. The names of the six command lines (with their expansions in parentheses) and descriptions of the commands are given below:

  • CKE (clock enable) - when this signal is low, the clock signal to the chip is blocked. Commands are not processed and the state of the other command lines is ignored.
  • /CS (chip select) - when this signal is high, all other control lines except CKE are ignored. This acts as a NOP (no operation) command.
  • DQM (data mask) - a high level on this line inhibits reading/writing of data. If a write command is issued at the same time, the data is not written to the DRAM. When this signal is asserted in the two clock cycles preceding a read cycle, the data is not read from memory.
  • /RAS (row address strobe) - despite the name, this is not a strobe but just one command bit. Together with /CAS and /WE, it encodes one of 8 commands.
  • /CAS (column address strobe) - despite the name, this is not a strobe but just one command bit. Together with /RAS and /WE, it encodes one of 8 commands.
  • /WE (write enable) - together with /RAS and /CAS, encodes one of 8 commands.

SDRAM devices are internally divided into 2 or 4 independent memory banks. The bank address inputs (BA0 and BA1) determine which bank the current command is intended for.

The following commands are accepted:

/CS | /RAS | /CAS | /WE | BAn | A10 | An | Command
H | x | x | x | x | x | x | command inhibit (no operation)
L | H | H | H | x | x | x | no operation
L | H | H | L | x | x | x | burst terminate: stop the current burst read or write operation
L | H | L | H | bank no. | L | column no. | read a burst of data from the currently active row
L | H | L | H | bank no. | H | column no. | as the previous command, then precharge (close) the row when the burst completes
L | H | L | L | bank no. | L | column no. | write a burst of data to the currently active row
L | H | L | L | bank no. | H | column no. | as the previous command, then precharge (close) the row when the burst completes
L | L | H | H | bank no. | row number (A10 and An together) | | open (activate) the row for read and write operations
L | L | H | L | bank no. | L | x | deactivate (precharge) the current row of the selected bank
L | L | H | L | x | H | x | deactivate (precharge) the current row of all banks
L | L | L | H | x | x | x | auto refresh: refresh one row of each bank using the internal counter; all banks must be deactivated
L | L | L | L | 0 | 0 | mode | load mode register: lines A0-A9 load the configuration parameters into the chip; the most important are the CAS latency (2 or 3 clock cycles) and the burst length (1, 2, 4 or 8)
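
As a companion to the table, here is a small decoder sketch (my own illustration; a real controller also looks at A10 and the bank/address pins to qualify several of these commands):

```python
# Illustrative decoder for the /CS, /RAS, /CAS, /WE command encoding above.
# "L" = low, "H" = high.

COMMANDS = {
    ("H", "x", "x", "x"): "command inhibit (no operation)",
    ("L", "H", "H", "H"): "no operation",
    ("L", "H", "H", "L"): "burst terminate",
    ("L", "H", "L", "H"): "read (with auto precharge if A10 is high)",
    ("L", "H", "L", "L"): "write (with auto precharge if A10 is high)",
    ("L", "L", "H", "H"): "active (open a row)",
    ("L", "L", "H", "L"): "precharge (all banks if A10 is high)",
    ("L", "L", "L", "H"): "auto refresh",
    ("L", "L", "L", "L"): "load mode register",
}

def decode(cs, ras, cas, we):
    # A high /CS masks the other command lines entirely.
    if cs == "H":
        return COMMANDS[("H", "x", "x", "x")]
    return COMMANDS[(cs, ras, cas, we)]

print(decode("L", "L", "H", "H"))   # active (open a row)
print(decode("L", "H", "L", "H"))   # read (with auto precharge if A10 is high)
```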


What is SDRAM?

Synchronous operation. Unlike standard asynchronous DRAM, SDRAM has a clock input, so the system clock that paces the microprocessor's activity can also drive the SDRAM. This means that the memory controller knows the exact clock cycle on which the requested data will be ready, which frees the processor from having to wait between memory accesses.

General properties of SDRAM

  • Synchronized with the CPU by clock cycles
  • Based on standard DRAM, but significantly faster - up to 4 times
  • Specific properties:
    synchronous operation,
    interleaved cell banks,
    ability to work in burst (pipelined) mode
  • The main contender for use as main memory in next-generation personal computers

Cell banks are memory cells inside the SDRAM chip that are divided into two independent banks. Since both banks can be active simultaneously, a continuous data flow can be achieved simply by switching between them. This technique is called interleaving; it reduces the overall number of memory access cycles and, as a result, increases the data transfer rate. Burst acceleration is a fast data transfer technique that automatically fetches a block of data (a series of sequential addresses) every time the processor requests a single address. It is based on the assumption that the next address requested by the processor will follow the previous one, which is usually true (the same prediction used in cache algorithms). Burst mode can be used both for read operations (from memory) and for write operations (to memory).
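
A toy cycle-count model (my own, with made-up activation and burst numbers rather than datasheet values) shows why interleaving two banks keeps the data bus busy:

```python
# Reading N bursts from a single bank versus alternating between two banks.
# The cycle counts are illustrative, not taken from any datasheet.

ACTIVATE = 3   # cycles to open (activate) a row
BURST    = 4   # cycles to stream one 4-word burst

def single_bank(n_bursts):
    # Each burst waits for its own row activation.
    return n_bursts * (ACTIVATE + BURST)

def two_banks_interleaved(n_bursts):
    # The next bank's activation overlaps the current bank's burst,
    # so after the first activation the bus streams data continuously.
    return ACTIVATE + n_bursts * BURST

print(single_bank(4))            # 28 cycles
print(two_banks_interleaved(4))  # 19 cycles
```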

Now about the claim that SDRAM is faster memory. Even though SDRAM is based on the standard DRAM architecture, the combination of the three characteristics above allows a faster and more efficient data transfer process. SDRAM can already transfer data at up to 100 MHz, which is almost four times faster than standard DRAM. This puts SDRAM on a par with the more expensive SRAM (static RAM) used as external cache memory.

Why SDRAM?

Because the computer's RAM stores the information the CPU needs to operate, the time it takes for data to pass between the CPU and memory is critical. A faster processor improves system performance only if it does not get caught in a "hurry up and wait" loop while the rest of the system struggles to keep up. Unfortunately, since Intel introduced its x286 processor fifteen years ago, conventional memory chips have no longer been able to keep pace with the enormously increased performance of processors.

Standard asynchronous DRAM operates without a clock input, which was not a problem for data transfer until the second decade of microprocessor development. From that point on, systems with faster processors that use standard DRAM have had to insert wait states (time delays) so as not to outrun the memory. A wait state is a cycle in which the microprocessor suspends whatever it is doing until the other components are ready to accept the next command. For this reason, new memory technologies have been introduced not only to increase transfer speed but also to shorten the data lookup and retrieval cycle. Facing these demands, memory chip manufacturers introduced a series of innovations, including page mode memory, static column memory, interleaved memory, and FPM DRAM (fast page mode). As processor speeds climbed to 100 MHz and higher, system designers added small high-speed external SRAM caches (second-level cache), as well as the new high-speed EDO (Extended Data Out) and BEDO (Burst EDO) memory. FPM DRAM and EDO DRAM are the most commonly used memory in modern PCs, but their asynchronous electrical design is not intended for speeds greater than 66 MHz (the maximum for BEDO). Unfortunately, this factor limits today's systems, based on Pentium-class processors with clock speeds above 133 MHz, to a memory bus frequency of 66 MHz.

The emergence of SDRAM.

Initially, SDRAM was proposed as a lower-cost alternative to expensive VRAM (Video RAM) used in graphics subsystems. However, it quickly found use in many applications and became the number one candidate for the role of main memory for the next generations of PCs.

How does SDRAM work?

SDRAM is based on standard DRAM and works in the same way, by accessing rows and columns of data cells. SDRAM simply combines its specific properties of synchronous operation, cell banks and burst operation to effectively eliminate wait states. When the processor needs data from RAM, it can get it at the right moment: the actual processing time of the data does not change; rather, the efficiency of fetching and transferring it does.

To understand how SDRAM speeds up fetching and retrieving data from memory, imagine that the central processor has a messenger who pushes a cart to the RAM building each time it needs to drop off or pick up information. In the RAM building, the clerk responsible for sending and receiving information typically takes about 60 ns to process a request. The messenger knows how long a request takes to process only after it has been received, but he does not know whether the clerk will be ready when he arrives, so he usually allows a little extra time in case of a mistake. He waits until the clerk is ready to receive the request, then waits the usual time required to process it, and then pauses again to check that the requested data has been loaded into his cart before taking it back to the central processor.

Now suppose instead that every 10 nanoseconds the clerk in the RAM building must be at the door, ready to accept another request or to respond to one received earlier. The process becomes more efficient, since the messenger can arrive at exactly the right time: processing of the request begins the moment it is received, and the information is sent to the CPU as soon as it is ready.

What are the performance benefits?

Access time (command to address to data fetch) is about the same for all memory types, as can be seen from the table above, since their internal architecture is basically the same. A more telling parameter is cycle time, which measures how quickly two sequential accesses to a chip can be made. The first read cycle is the same for all four memory types: 50 ns, 60 ns or 70 ns. The real differences appear when you look at how quickly the second, third, fourth and subsequent read cycles complete, which is where cycle time comes in. For a "-6" FPM DRAM (60 ns), the second cycle can be completed in 35 ns. Compare this with a "-12" SDRAM (60 ns access time), where the second read cycle takes 12 ns. That is three times faster, without any significant reworking of the system!
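
Using the numbers quoted in this paragraph, a quick worked comparison (a sketch; the 60 ns first access is taken from the text) of four back-to-back reads:

```python
# Four back-to-back reads: first access plus three follow-on cycles.

def four_reads_ns(first_access_ns, cycle_time_ns):
    return first_access_ns + 3 * cycle_time_ns

print(four_reads_ns(60, 35))   # "-6" FPM DRAM: 165 ns
print(four_reads_ns(60, 12))   # "-12" SDRAM:    96 ns
```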

The most significant performance improvements when using SDRAM are:

  • Faster and more efficient - nearly four times faster than standard DRAM
  • Could potentially replace the more expensive EDO/L2-cache combination that is now standard
  • Running synchronously eliminates timing constraints and does not hold back the latest processors
  • Internal interleaving of two bank operations supports a continuous data flow
  • Burst operation up to a full page (using up to x16 chips)
  • Pipelined addressing allows the second requested data item to be accessed before the first request has completed.

What is the place of SDRAM among future PC memory?

Currently, FPM DRAM and EDO DRAM make up the bulk of mainstream PC memory, but SDRAM is expected to quickly become the primary alternative to standard DRAM. Upgrading from FPM memory to EDO (plus L2 cache) increases performance by 50%, and moving from EDO to BEDO or SDRAM provides a further 50% boost. Still, many suppliers of ready-made systems see BEDO only as an intermediate step between EDO and SDRAM because of BEDO's inherent speed limitations; they expect SDRAM to become the memory of choice for main memory.

The current demand comes from graphics-intensive and computationally intensive applications such as multimedia, servers, digital set-top boxes (home systems combining a TV, stereo system, web browser, and so on), ATM switches, and other network and communication equipment requiring high throughput and data transfer rates. In the near future, however, industry experts predict that SDRAM will become the new memory standard in personal computers.

The next step in the development of SDRAM has already been taken: DDR SDRAM, or SDRAM II

This step was taken by Samsung, known as the largest producer of memory chips under the SEC mark. The release of the new memory will be officially announced in the near future, but some details are already known. The name of the new memory is "Double Data Rate SDRAM", or simply "SDRAM II". The point is that the new synchronous memory can transfer data on both the rising and falling edges of the bus clock signal, which raises the throughput to 1.6 GB/s at a bus frequency of 100 MHz. This doubles the memory bandwidth compared with existing SDRAM. It is stated that the new VIA VP3 chipset will make it possible to use the new memory in systems.

Be careful when choosing SDRAM for use in systems based on the i440LX chipset

As practice has shown, motherboards made on the basis of the latest i440LX chipset are very sensitive to the type of SDRAM memory used. This is due to the fact that the new Intel specification SPD for SDRAM defines additional requirements for the content of special information about the DIMM module used, which must be located in a small in volume and size element of electronically programmable memory EPROM, located on the memory module itself. However, this does not mean that any SDRAM module with an EPROM on it complies with the SPD specification, but in particular, it means that a module without an EPROM does not exactly comply with this specification. Some boards based on the i440LX set require only such special modules to operate, but most of the existing ones work perfectly with regular SDRAM modules. This step Intel, by introducing a standard for synchronous memory modules, is associated, first of all, with the desire to provide reliable operation and memory compatibility with the future i440BX chipset, which will already support a bus frequency of 100MHz.