Questions

I have a large database (~180GB of data plus indexes: a couple of very tall but thin tables of ~200-400M rows, and two ~20M-row "wide" tables) and don't have the resources to simply throw money at the problem and buy tons of RAM. It's running on a dedicated box with 2x 500GB 15k drives in RAID 1 and 32GB of DDR2 memory. For some operations it's fine now, but since the indexes don't fit in memory, if the sections of interest to a particular query haven't been read recently it can be slow as hell. Percona Server 5.5 + InnoDB. I've tuned the InnoDB settings. Are there any good hacks or tips?

Regardless of how many rows your tables have, what matters is how the data is laid out on disk (and how large that footprint is), and how it is read into the relevant MySQL memory buffers. The goal is to shrink your "hot" data until it fits in memory. Optimizing indexes is an art. There are also many tricks, such as using the smallest suitable data type (INT rather than BIGINT) and covering indexes (which let a query be answered from the index alone, without touching the data rows); see the sketches below. Depending on your queries, partitioning may also help. Capturing the Query Execution Plan (QEP), then learning how to read, understand, and improve it, is essential for fixing slow queries. http://effectivemysql.com/ has a number of introductory presentations on optimizing SQL.
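
Since the working set does not fit in RAM, the single biggest lever is how much of the 32GB goes to the InnoDB buffer pool. The question says the settings are already tuned, but for anyone reading along, here is a minimal my.cnf sketch for a dedicated box like this one; the exact values are illustrative assumptions, not figures from the question:

    [mysqld]
    # Give InnoDB most of the RAM on a dedicated server (~70-80% is a common rule of thumb).
    innodb_buffer_pool_size = 24G
    # Larger redo logs smooth out heavy writes; 5.5 caps the combined log size at 4GB.
    innodb_log_file_size = 1G
    # Bypass the OS page cache so pages are not cached twice.
    innodb_flush_method = O_DIRECT
    # One tablespace file per table makes it possible to reclaim space later.
    innodb_file_per_table = 1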
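
To make the data-type and covering-index tricks concrete, here is a sketch using hypothetical table and column names (nothing below is from the question):

    -- Hypothetical tall-but-thin table.
    CREATE TABLE events (
        id         BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
        user_id    INT UNSIGNED NOT NULL,     -- INT, not BIGINT: 4 bytes instead of 8 in every index entry
        event_type TINYINT UNSIGNED NOT NULL, -- 1 byte covers a small set of types
        created_at DATETIME NOT NULL,
        PRIMARY KEY (id),
        -- Covering index: the query below reads only this index, never the data rows.
        KEY idx_user_created (user_id, created_at, event_type)
    ) ENGINE=InnoDB;

    -- Answered entirely from idx_user_created; EXPLAIN reports "Using index".
    SELECT created_at, event_type
    FROM events
    WHERE user_id = 12345
      AND created_at >= '2013-01-01';

On a 300M-row table, halving the width of an indexed column roughly halves the index pages competing for buffer pool space, which is exactly the memory-to-disk ratio the answer is talking about.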
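
If most queries only touch recent rows, range partitioning by date keeps the hot partitions and their indexes small. A sketch on the same hypothetical table:

    -- MySQL 5.5 requires every unique key (including the primary key) to
    -- contain the partitioning column, so the PK must be widened first.
    ALTER TABLE events
        DROP PRIMARY KEY,
        ADD PRIMARY KEY (id, created_at);

    ALTER TABLE events
    PARTITION BY RANGE (TO_DAYS(created_at)) (
        PARTITION p2012 VALUES LESS THAN (TO_DAYS('2013-01-01')),
        PARTITION p2013 VALUES LESS THAN (TO_DAYS('2014-01-01')),
        PARTITION pmax  VALUES LESS THAN MAXVALUE
    );

Queries that filter on created_at then scan only the matching partitions; EXPLAIN PARTITIONS shows which ones are touched.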
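
Capturing the QEP is just a matter of putting EXPLAIN in front of the query; the notes below sketch what to look for in the output, using the hypothetical query from above:

    EXPLAIN SELECT created_at, event_type
    FROM events
    WHERE user_id = 12345
      AND created_at >= '2013-01-01';

    -- Columns worth checking in the EXPLAIN output:
    --   type:  "ref" or "range" is good; "ALL" means a full table scan
    --   key:   the index actually chosen (idx_user_created here, ideally)
    --   rows:  estimated rows examined; a huge number flags the expensive step
    --   Extra: "Using index" = covered; "Using filesort" / "Using temporary" = red flags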


Answered 10 years ago
