
Thread: Caching Queries

  1. #1
    Senior Coder NancyJ's Avatar
    Join Date
    Feb 2005
    Location
    Bradford, UK
    Posts
    3,174
    Thanks
    19
    Thanked 66 Times in 65 Posts

    Caching Queries

Is there a way to tell MySQL to cache a query's result but not send me the data? We're writing an autocaching script, and I reckon we could speed it up a bit if MySQL just ran the query instead of sending the data back too.

  • #2
    raf
    Master Coder
    Join Date
    Jul 2002
    Posts
    6,589
    Thanks
    0
    Thanked 0 Times in 0 Posts
I don't understand the question.
Are you talking about recordset buffering?
    Posting guidelines I use to see if I will spend time to answer your question : http://www.catb.org/~esr/faqs/smart-questions.html

  • #3
NancyJ
When a query is run, MySQL caches the result; a cached result is served from memory and is much faster than an uncached query. E.g. I have a query that takes 6-13s to run the first time, but from the cache it takes 0.3-0.5s.
6-13s is not an acceptable time for a query to take, so we have a script that re-runs the query after the data is updated, so that from the user's point of view it only ever takes 0.3-0.5s.
But when we run that warming query we don't care about getting the result back - just about it being put in MySQL's query cache.
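For context, the query-cache behaviour being described can be inspected with a couple of status queries. This is a sketch against MySQL 5.x, where the query cache exists; the `products` table is hypothetical:

```sql
-- Is the query cache on, and how big is it?
SHOW VARIABLES LIKE 'query_cache%';

-- Hit/insert counters: Qcache_hits climbs when a repeated
-- query is served from the cache instead of being re-executed
SHOW STATUS LIKE 'Qcache%';

-- A query can also opt in or out of the cache explicitly
SELECT SQL_CACHE    id, name FROM products WHERE active = 1;
SELECT SQL_NO_CACHE id, name FROM products WHERE active = 1;
```

There is no server-side way to run a query into the cache without the result also travelling to the issuing client; a warming script can only fetch and discard the rows.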

  • #4
raf
thanks for the lesson --> this is generally known as recordset buffering.

If you want to optimise even further beyond the built-in buffering, then why not use a temporary or permanent memory table? --> you just populate/refresh it with the result of your current SELECT, and then you run your queries against this memory table.
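raf's suggestion might look like this; the table and column names are made up for illustration:

```sql
-- Populate a MEMORY table from the expensive SELECT.
-- The result never needs to be sent to a client.
DROP TABLE IF EXISTS search_cache;
CREATE TABLE search_cache ENGINE=MEMORY
    AS SELECT product_id, price, stock
       FROM products
       WHERE active = 1;

-- Subsequent reads are served from memory only
SELECT product_id, price FROM search_cache WHERE price < 100;
```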

  • #5
NancyJ
I have considered the possibility of using memory tables; after extensive research we discovered they are no good for us because of the types of queries we run.

Also, as to this being generally known as 'recordset buffering': I have found nothing on Google to suggest this is the case, at least in relation to MySQL. In fact the ONLY results I can find on Google that mention 'recordset buffering' with MySQL are posts on this forum, by YOU.
    Last edited by NancyJ; 09-20-2006 at 11:14 AM.

  • #6
raf
Quote Originally Posted by NancyJ
... after extensive research discovered they are no good for us because of the types of queries we run
What's so special about these queries that caching the recordset through MySQL's built-in recordset buffering would be helpful, but storing the result set in a memory table would not be?
It shouldn't take more than the "0.3-0.5s" to run a SELECT on a memory table...

  • #7
NancyJ
Memory tables don't support TEXT or BLOB columns.
I also don't see how creating a memory table from the result of every SELECT would be more useful than using MySQL's query cache.
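The limitation NancyJ mentions is easy to reproduce; MySQL rejects the table definition outright (table name hypothetical):

```sql
-- Fails: the MEMORY engine does not support BLOB/TEXT columns
CREATE TABLE notes_mem (
    id   INT,
    body TEXT
) ENGINE=MEMORY;

-- A common workaround is a bounded VARCHAR in place of TEXT,
-- at the cost of truncating or excluding long values
CREATE TABLE notes_mem (
    id   INT,
    body VARCHAR(255)
) ENGINE=MEMORY;
```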

  • #8
raf
Quote Originally Posted by NancyJ
I also don't see how creating a memory table from the result of every SELECT would be more useful than using MySQL's query cache.
Well, I'm not saying this is the way you should go. In fact, I know next to nothing about your actual situation, because you give next to no info on it.

I was just proposing it since:
- you want to have a result set cached without the data being sent, which as far as I know is not possible, and which is certainly not the intended use of the query cache;
- memory tables also allow very fast access, which seemed to be your main concern (not sure what your actual problem is anymore);
- memory tables can be populated with the SELECT that you now use to create the cached result set, without any need to send the SELECT's result to a client;
- memory tables always stay in memory, while a result set that exceeds the cache's size limit is simply not cached at all, so they might offer more predictable performance than your cached result sets;
- memory tables can be updated in place, while a cached result set has to be replaced wholesale, so it might be more efficient to have your autocaching script update a memory table than to create a new result set every time (which apparently takes 6-13s);
- memory tables offer far more flexibility to filter and sort than you could ever get from a cached result set, making it a lot more efficient to select parts of the data. On top of that, you can create additional indexes on them, which boosts the performance of fetching parts of the result set even more;
- memory tables can be used in subselects or joined with other tables, which offers a lot of possibilities to enrich the data from your initial result set with data that you pull from other tables - like storing only the fact data in your memory table and then getting your descriptive data from the related dimension tables when you need it.
    Last edited by raf; 09-20-2006 at 02:18 PM.
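The indexing and join points from the list above can be sketched like this (all names hypothetical, continuing the earlier example):

```sql
-- MEMORY tables default to HASH indexes; a BTREE index
-- additionally supports range scans and sorted reads
ALTER TABLE search_cache
    ADD INDEX idx_price USING BTREE (price);

-- Join the in-memory fact data against a dimension table
SELECT s.product_id, p.name, s.price
FROM search_cache AS s
JOIN products AS p ON p.id = s.product_id
WHERE s.price BETWEEN 10 AND 100
ORDER BY s.price;
```

None of this filtering, sorting, or joining is possible against a query-cache entry, which only ever answers a byte-for-byte identical repeat of the original query.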

  • #9
NancyJ
You also need to have enough memory to hold all the data... Out of interest, since you're here: what type of locking do memory tables do?

  • #10
raf
Quote Originally Posted by NancyJ
You also need to have enough memory to hold all the data...
True. But then again:
- your memory tables should ideally only contain fact data (like foreign keys to dimension tables, aggregated figures, etc.), so no lengthy values or variable-length columns --> the record size should be relatively small;
- it's like the difference between lower- and higher-level languages: if you control yourself what is kept in memory (as with memory tables) then you have a lot more to take care of than if you let the RDBMS handle it (as with result set caching), but it allows you to be more selective, which can lead to better performance;
- for the cached queries, you also need to consider the memory they occupy; a result set that exceeds the size limit simply won't be cached, so the amount of FREE memory is even more important if you automatically cache your queries.
Quote Originally Posted by NancyJ
Out of interest, since you're here: what type of locking do memory tables do?
It's table-level locking (like the MyISAM engine) --> http://dev.mysql.com/doc/refman/5.0/...l-locking.html

  • #11
NancyJ
If it's table-level locking then it won't be much more use than caching anyway, since it will lock up when you try to change the data - therefore losing that advantage over caching, since we would have to run 2 memory tables and switch them.
You have to understand we have clients who will want to update 750k-1 million records in 1 go.
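The two-table switch NancyJ describes is usually done with RENAME TABLE, which swaps several tables in one atomic statement, so readers never see a half-built table (names hypothetical):

```sql
-- Rebuild into a staging copy while readers keep using search_cache
DROP TABLE IF EXISTS search_cache_new;
CREATE TABLE search_cache_new ENGINE=MEMORY
    AS SELECT product_id, price, stock
       FROM products WHERE active = 1;

-- Atomic swap: both renames happen together
RENAME TABLE search_cache     TO search_cache_old,
             search_cache_new TO search_cache;

DROP TABLE search_cache_old;
```

The cost is holding two copies of the data in memory for the duration of the rebuild.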

  • #12
raf
Quote Originally Posted by NancyJ
If it's table-level locking then it won't be much more use than caching anyway, since it will lock up when you try to change the data - therefore losing that advantage over caching, since we would have to run 2 memory tables and switch them.
You have to understand we have clients who will want to update 750k-1 million records in 1 go.
Well, I never clocked it, but I'd be very surprised if the disadvantage of table-level locking on a memory table were noticeable. Memory tables are typically for temporary data, so it makes sense that the engine is optimised for reads and very selective updates and thus uses table-level locking.
If we're talking about frequently updating 750k-1 million records, then a memory table won't be the best solution.
With these kinds of figures, using a memory table or even caching the result sets will never be ideal.

  • #13
NancyJ
Maybe I'm doing something wrong, but I just tried using a memory table joined to the info tables, and it took 2.5s to return the same results as the system I'm currently using for that query, which returns the result in 0.03s with caching turned off.

  • #14
NancyJ
Quote Originally Posted by raf
Well, I never clocked it, but I'd be very surprised if the disadvantage of table-level locking on a memory table were noticeable. Memory tables are typically for temporary data, so it makes sense that the engine is optimised for reads and very selective updates and thus uses table-level locking.
If we're talking about frequently updating 750k-1 million records, then a memory table won't be the best solution.
With these kinds of figures, using a memory table or even caching the result sets will never be ideal.
The system we're trialling at the moment is very promising: our autocaching script can run 24 hours' worth of unique searches in 1 hour. The obvious downside to the system is that clients will have to wait up to 2 hours for changes they make to take effect. I'm trying to reduce that time, mostly by speeding up queries, but I thought I'd ask this since it was a random idea I had that I thought might give a minor speed increase.

  • #15
raf
Quote Originally Posted by NancyJ
The obvious downside to the system is that clients will have to wait up to 2 hours for changes they make to take effect. I'm trying to reduce that time, mostly by speeding up queries, but I thought I'd ask this since it was a random idea I had that I thought might give a minor speed increase.
Like I said --> I don't know your situation, so it's hard to comment or make suggestions.
I suppose your ideal situation is that these changes are made in real time and that these large volumes of data can still be queried very quickly. I never had a chance to work with it, but the NDB storage engine seems ideal for that sort of situation.
Apart from such a high-profile solution, I think you can only do the normal DBA stuff (keeping the tables as small as possible through archiving, defragmenting/optimising your tables, indexing, reducing the number of joins/subselects, sorting your data, horizontal scaling, ...) and have your boss/client balance out how much they are prepared to invest in a very performant, real-time application.
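Some of that "normal DBA stuff" can be sketched in a line or two of routine maintenance (table name hypothetical; what helps depends entirely on the workload):

```sql
-- Defragment a MyISAM/InnoDB table and sort its index pages
OPTIMIZE TABLE products;

-- Refresh index statistics so the optimizer picks better plans
ANALYZE TABLE products;
```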

