
FEATURED Warning about SSD caching (SRT) on new Z68

Oh... Here I was thinking that it acted like the internal 64MB cache on the HDD. Would be nice if it acted like that, or even like CPU cache... o_O

Edit: From my understanding of memory: storage is permanent data that is accessible; it is sent to the cache bit by bit, and once the cache (which is temp memory) is full, it is then sent to the CPU to process. Depending on your processor, 2-12MB at a time. Once that information pass is complete, it sends a command to delete that pass of data to make more room. That is what RAM is as well, cache, at least from the way I understand it. Temporary data storage to expedite HDD read times.
 
Dude, you are all over the place... LOL! CPU cache is WAY faster than an SSD, IIRC!!! So it wouldn't be nice at all!

ESSENTIALLY it does act like internal cache, but it is storage on another drive, as that SSD also has its own internal cache.
 
I know CPU cache is far faster than SSD cache, and I don't see how I'm all over the place.
I know that DDR3 RAM is faster than SSD cache; SSDs are essentially DDR memory in a drive form.
What I was trying to say is that while slow at the moment, it is the concept of using extra memory to let the CPU work with larger sections of data at a time. The main reason it's fast on the CPU is the distance from the cores: it's nanometers away as opposed to 3+ inches. Less travel time = faster response.

Cache for a Cache drive...

Let me rephrase my last post into something as coherent as possible.

Storage drives (ANY form of storage, be it SSD, HDD, or some other technology for permanent storage. It's data that is NOT temporary access for the processor.)

Basically, If a SSD is running as designed, it is Permanent storage.

Cache (ANY form of temporary memory used to store temporary data to be processed, then deleted to make room for more temp data when new applications are launched. This includes RAM, Random Access Memory; modern-day DDR3 is MUCH faster than the associated SSD cache.)

If an SSD is running as SRT Cache, it is temporary storage.

What I'm trying to conceptualize is that if you have MORE cache for your PERMANENT storage drives, that speeds up the drive by giving the CPU a pool to pull from, so it doesn't have to keep sending signals to the HDD to bring more data in. The initial load is a little bit slower, but it speeds up the entire process by skipping the signal to send new cache.

What I was saying about CPU cache is to slowly make it faster, maybe as fast as DDR5 VRAM, if not faster, to decrease the amount of work the CPU spends on filling and depleting its internal cache; that way it does less work with the same data, resulting in things like less heat.
 
SSDs are essentially DDR memory in a drive form.
It's actually much, much slower.

The main reason it's fast on the CPU is the distance from the cores: it's nanometers away as opposed to 3+ inches. Less travel time = faster response.
The architecture of the memory types is also significantly different.

What I was saying about CPU cache is to slowly make it faster, maybe as fast as DDR5 VRAM
The CPU L1 and L2 caches (and L3, if applicable) are still much faster than any kind of RAM-type hardware.

I'm still a little confused about what you're saying. Essentially, this hierarchy exists:

CPU Registers
CPU L1
CPU L2
Main memory
---------------------------
SSD cache (if applicable)
HDD

Above the line is memory available to the CPU. Below the line is not directly addressable by the CPU.
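To put that hierarchy in perspective, here's a small Python sketch with rough, commonly cited order-of-magnitude random-access latencies. These are illustrative ballpark figures, not measurements of any specific system:

```python
# Approximate access latencies for each level of the hierarchy above.
# Ballpark, order-of-magnitude numbers only -- real values vary by hardware.

latency_ns = {
    "CPU registers": 0.3,          # well under a cycle of wiring delay
    "CPU L1":        1,            # a few cycles
    "CPU L2":        4,
    "Main memory":   100,
    "SSD (random)":  100_000,      # ~0.1 ms
    "HDD (random)":  10_000_000,   # ~10 ms of seek + rotational delay
}

for level, ns in latency_ns.items():
    print(f"{level:15s} ~{ns:>14,.1f} ns  ({ns / latency_ns['CPU L1']:,.0f}x L1)")
```

Each step down the hierarchy costs roughly one to two orders of magnitude in latency, which is why an SSD cache in front of the HDD helps even though it's nowhere near RAM speed.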

The memory on SSDs and HDDs (the actual DRAM cache they have) is usually workspace for the drive controller, not really a read-through/write-through cache. On some drives it probably functions that way, but I don't *think* that's always the case. In any event, it doesn't really matter... you can just lump it into its respective device in the hierarchy.

It sounds like you're trying to shortcut through layers of the caching hierarchy, and that doesn't really make sense. All of these layers work to accomplish the end goal - getting more data to the CPU more quickly.

Edit: From my understanding of memory: storage is permanent data that is accessible; it is sent to the cache bit by bit, and once the cache (which is temp memory) is full, it is then sent to the CPU to process. Depending on your processor, 2-12MB at a time. Once that information pass is complete, it sends a command to delete that pass of data to make more room. That is what RAM is as well, cache, at least from the way I understand it.
That's not really how a cache works. It doesn't try to fill the cache, process it, and then flush it. Rather, the CPU processes instructions and fetches data as necessary. If that data isn't in the L1 cache, it'll generally go out to the L2, and then to memory. When it finds the data, it may write it back to the L1 or L2 cache in case it needs it again in the future.
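That fetch-on-miss, fill-on-the-way-back behavior can be sketched in Python. This is a toy model (made-up capacities, LRU standing in for real replacement policies), not how actual silicon works:

```python
# Toy model of demand fetching through a two-level cache hierarchy.
# On a miss, the data is found in a lower level and written back into
# the faster levels in case it's needed again soon.

from collections import OrderedDict

class Level:
    """A tiny LRU cache standing in for one level of the hierarchy."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()        # address -> data

    def get(self, addr):
        if addr in self.lines:
            self.lines.move_to_end(addr)  # mark as recently used
            return self.lines[addr]
        return None

    def put(self, addr, data):
        self.lines[addr] = data
        self.lines.move_to_end(addr)
        if len(self.lines) > self.capacity:
            self.lines.popitem(last=False)  # evict least recently used

def fetch(addr, l1, l2, memory):
    """Return (data, level that served it); fill faster levels on a miss."""
    data = l1.get(addr)
    if data is not None:
        return data, "L1"
    data = l2.get(addr)
    if data is not None:
        l1.put(addr, data)                # promote into L1
        return data, "L2"
    data = memory[addr]                   # everything lives in "main memory"
    l2.put(addr, data)
    l1.put(addr, data)
    return data, "memory"

memory = {a: f"data@{a}" for a in range(16)}
l1, l2 = Level(2), Level(4)

print(fetch(5, l1, l2, memory))  # first access: served from memory
print(fetch(5, l1, l2, memory))  # second access: hits in L1
```

Nothing ever tries to "fill the whole cache and flush it"; data only moves up the hierarchy when an access actually asks for it.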

It's the application's job to load data into main memory so that the CPU can access it. An SSD cache will speed that process up. It'll also speed up paging main memory to and from disk in the event that main memory fills up and a pagefile is necessary.
 
One is still limited by the speed of the SSD as cache. Even though it's running LIKE cache, it's still not as fast as its onboard cache or CPU cache. That said, the more RST cache available, the more it stores and the more it 'speeds up' the drive, so you are essentially correct if I understand you correctly... but not in the way you explained it.

CPU cache is faster than DDR5 RAM. Run the Everest memory benchmarks that test cache and look at the results.

BUT we are drifting a bit off topic, so I'm going to leave it at this. If you want to clarify further, feel free to message me or start another thread on your dreams of speeding up........... whatever you want to speed up. :)

I'd give thanks, but I'm spent for the day. :rolleyes:
 
First, please don't delete this thread... The discussion may seem frustrating, but it is a helpful tool for some of us to see the progression of thoughts.

This discussion brings me to a really interesting question: how would Intel handle a defrag? From what I hear, a defrag could make the entire solution a waste and tear apart the SSD. Also, is TRIM still supported on this new hybrid drive?

All I want is random access to be faster... But to accomplish this requires more than a simple hardware solution. If it is to be handled like a cache, then programmers should be able to optimize for it and the OS should be integrated with it... Instead we are stuck with a half-cocked, 60GB-limited solution...
 
All I want is random access to be faster... But to accomplish this requires more than a simple hardware solution. If it is to be handled like a cache, then programmers should be able to optimize for it and the OS should be integrated with it... Instead we are stuck with a half-cocked, 60GB-limited solution...
I feel like they've done the right thing here. Managing it in hardware means that the OS doesn't need to be involved, which simplifies things considerably. This way, the device is presented to the OS as a single storage device, and the Intel drivers can manage things without the OS needing to be aware. Programmers shouldn't have to deal with programming around specific hardware configurations; that's what operating systems are for. If someone really needs a set of data to always be accessible at SSD-like speeds, then that data belongs on an SSD, not on a cached hard drive. Otherwise, the caching is just an additional layer of abstraction and no one needs to worry about it except Intel. It's great for typical consumer usage patterns.

This discussion brings me to a really interesting question: how would Intel handle a defrag? From what I hear, a defrag could make the entire solution a waste and tear apart the SSD. Also, is TRIM still supported on this new hybrid drive?
Good question. It depends on how Intel does the mapping from SSD to HDD, I guess. At some level the SSD must be aware of where data sits on the HDD, so if every block on the SSD is associated with a block address on the HDD, then all that needs to happen when a file changes physical location on disk is a simple address update on the SSD. It would be a lot less writing than would actually happen on the HDD, and it would only apply to files that actually get defragged.
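The pointer-update idea could be sketched like this. Everything here (the class, the block numbers) is hypothetical illustration of the mapping concept, not Intel's actual SRT internals:

```python
# Hypothetical sketch: if the SSD cache keeps a map of "SSD block ->
# HDD block address", a defrag that moves data on the HDD only needs a
# small metadata update on the SSD, not a rewrite of the cached data.

class CacheMap:
    def __init__(self):
        self.ssd_to_hdd = {}   # ssd_block -> hdd_block it mirrors

    def cache(self, ssd_block, hdd_block):
        self.ssd_to_hdd[ssd_block] = hdd_block

    def on_defrag_move(self, old_hdd_block, new_hdd_block):
        """Defrag moved a block on the HDD: just update the pointer."""
        for ssd_block, hdd_block in self.ssd_to_hdd.items():
            if hdd_block == old_hdd_block:
                self.ssd_to_hdd[ssd_block] = new_hdd_block
                return True    # one tiny metadata write, no data rewrite
        return False           # block wasn't cached; nothing to do

m = CacheMap()
m.cache(ssd_block=0, hdd_block=1234)
m.on_defrag_move(1234, 5678)   # defrag relocated that extent on the HDD
print(m.ssd_to_hdd[0])         # cache entry now points at the new location
```

Under that scheme a defrag would cost the SSD a handful of pointer updates rather than rewriting gigabytes of cached data.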

I would guess Intel is utilizing TRIM where available, since they've been pushing it since the beginning.

By the way, I would wager that their caching scheme is less naive than just reading through and writing through. That is, if I have a 40GB HD video file and a 40GB SSD cache drive, I would be very surprised if playing back the video evicted everything else from the cache. Also (this applies to most people reading this board) this kind of system is ideal for your friends and parents, and probably not quite as ideal for you. :)
 
I disagree with GJelly's assessment that this is half-cocked. I think the implementation as it stands is a solid solution for those that want improvements over their standard HDD. If you can afford a 60GB or larger SSD, then you should be putting your OS and some/most applications on it anyway, so that limitation doesn't bother me in the least.

As far as the defrag, I would have to imagine that since it's on an SSD, it doesn't get defragged like it would in a non-RST environment. You don't defrag SSDs, as it messes up the drive map. It's not that the data on the caching SSD is MOVED from the HDD to the SSD, so there wouldn't be a need to defrag it. I'm guessing there are probably some pointers for specific data sets that just point to the SSD as opposed to the HDD.

@ Johan - your input is quite valuable in this thread my friend. :)
 
IRST is smart enough to know when you are running a virus scan; I'm sure it also recognizes defragging.
 
Gigabyte has a Z68 board with a preinstalled 20GB SSD. Sorry if this was already posted:
http://www.newegg.com/Product/Product.aspx?Item=N82E16813128505

Given that this mobo is about $100 more than its non-mSATA counterpart, and you could buy an Agility 3 for the same amount after rebate, wouldn't that be a no-brainer? I know that the 20GB drive is SLC, but still...

Edit: After reading the AnandTech article, someone like me who uses a large assortment of big applications would run out of room with just 20GB.


I'm thinking of doing this actually. Why not give Z68 a shot and speed up my v-raptor.
 
Edit: After reading the AnandTech article, someone like me who uses a large assortment of big applications would run out of room with just 20GB.
But they didn't see much improvement going over 40GB, IIRC. It's interesting.
 
Right, but a SATA II 40GB costs about the same as a SATA III 60GB, so why go low? I wonder if spinners actually utilize SATA III when set up for IRST...
 
Also (this applies to most people reading this board) this kind of system is ideal for your friends and parents, and probably not quite as ideal for you. :)

I agree with this 100%... If they sold 20-40GB SATA III drives for less than $100, I would consider adding this to my mechanical drive... But to trust a $280 drive to this cache is a bit over my tolerance...
 
SATA II vs. III shouldn't even be a concern in this case. There are SSDs available at the price you are looking for that would work, and work well. Even the slowest modern SSDs (think back to the original Vertex, or even Summit, and the Intel X25) would VASTLY improve the performance of a HDD.
 
Maybe in the future... Right now... I'm broke... That is also contingent on the final answers regarding other features/problems/issues with the tech...
 
But to trust a $280 drive to this cache is a bit over my tolerance...
Yeah, I don't think that's really the use case they were thinking of, and I agree that that would be a huge waste.

They're targeting the crowd that wants to benefit from SSDs on a budget, not drop $200+ on a new drive and then go through the hassle of setting up a multi-drive system. The simplicity of this caching feature is the big win. If someone were able to come up with a reliable 20GB SSD for around $60-$80 (it wouldn't need to be super fast), then this would become a great mainstream solution.

X25-V, maybe?
 
The X25-V sounds like a prime candidate, and so do its lower-cost Kingston versions. Like I said, plenty for less than $100...

(now watch that threshold fall even more...waffles).
 