
Thread: ODBC buffering

  1. #1 - spitenmalice (New to the CF scene)

    ODBC buffering

    Does anyone know if it is possible to make ODBC queries faster? Does it save the result in a buffer somewhere?
    I can do a query, but I have to wait until it has looked through the whole file before I get my information back. That isn't a problem by itself, because the page won't refresh and post the information until I have it all anyway.
    However, if I were to use a loop to write the information to a file, is it possible to have it write each piece of information to the file as it finds it, rather than looking through the whole file first, saving everything it found in memory or a buffer, and then going through all of it a second time just to write it to a file?
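    Something like this rough sketch is what I have in mind (the DSN, table, column and file names are just placeholders):

    PHP Code:
    <?php
    // Write each row out as soon as it is fetched, instead of collecting
    // the whole result set in PHP first. All names here are placeholders.
    $conn   = odbc_connect('MyDSN', 'user', 'pass');
    $result = odbc_exec($conn, 'SELECT column0, column1 FROM bigtable');

    $fp = fopen('output.txt', 'w');
    while ($row = odbc_fetch_array($result)) {
        // One line per row, written immediately rather than held in a buffer.
        fwrite($fp, implode("\t", $row) . "\n");
    }
    fclose($fp);
    odbc_close($conn);
    ?>

    Whether the driver actually streams the rows or still reads the whole result set first is really what I'm trying to find out.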

  • #2 - raf (Master Coder)
    Quote Originally Posted by spitenmalice
    Does anyone know if it is possible to make ODBC queries faster? Does it save the result in a buffer somewhere?
    I can do a query, but I have to wait until it has looked through the whole file before I get my information back. That isn't a problem by itself, because the page won't refresh and post the information until I have it all anyway.
    However, if I were to use a loop to write the information to a file, is it possible to have it write each piece of information to the file as it finds it, rather than looking through the whole file first, saving everything it found in memory or a buffer, and then going through all of it a second time just to write it to a file?
    No idea what you are talking about.

    There are ODBC drivers for text files, but usually ODBC is used for DB traffic. What sort of buffering exists and how to fine-tune it is DB-specific; mySQL, for instance, has recordset buffering, but that is not so for all formats.
    Anyway, this is quite a broad area without general rules: if and how you should buffer output depends on your application, the sort of data, the sort of queries, the traffic, ...
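    With mySQL there is at least a way to skip the client-side result buffer and start reading rows as the server sends them. A rough, untested sketch (the connection details and table name are made up):

    PHP Code:
    <?php
    // Untested sketch: mysql_unbuffered_query() does not buffer the whole
    // result set in PHP, so rows can be read before the full set has arrived.
    // Connection details and table name are placeholders.
    $link = mysql_connect('localhost', 'user', 'pass');
    mysql_select_db('mydb', $link);

    $result = mysql_unbuffered_query('SELECT col1, col2 FROM bigtable', $link);
    while ($row = mysql_fetch_assoc($result)) {
        // process $row as it arrives from the server
    }
    mysql_close($link);
    ?>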

  • #3 - spitenmalice (New to the CF scene)

    Resource ID#1

    Well, I've found that the ODBC query I call in PHP saves the results to a resource ID.
    I guess I was wondering if I can at least echo the information behind that resource ID before all of it is there and the query has actually finished.
    I am querying a rather large database and I don't even need to display the information; I would just like to save it to a file or pass it to another function. However, when I fetch the results they have to come from the resource ID, so I assume the resource ID has to be complete and contain all the information from my query.
    I want to know if it is possible to cheat and get at the information behind the resource ID before my query is even done.

  • #4 - raf (Master Coder)
    Quote Originally Posted by spitenmalice
    I want to know if it is possible to cheat and get at the information behind the resource ID before my query is even done.
    With mySQL I don't think it is possible, since it's a disconnected recordset.
    With Oracle and DB2 it is possible.

    What's your response time like? I know from my performance tests that mySQL is really fast at selecting records, and in the applications I've built I never needed large numbers of records at once, because I do as much processing as possible in SQL and use a LIMIT clause to deliver the content in chunks to the browser (which should always be a primary goal in a DB-driven application).
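    For the chunked delivery, a rough sketch of what I mean (table, column names and page size are made up):

    PHP Code:
    <?php
    // Rough sketch of chunked delivery with LIMIT. Connection details,
    // table, columns and page size are all placeholders.
    $link = mysql_connect('localhost', 'user', 'pass');
    mysql_select_db('mydb', $link);

    $page     = isset($_GET['page']) ? (int) $_GET['page'] : 0;
    $pagesize = 50;
    $offset   = $page * $pagesize;

    $result = mysql_query("SELECT id, name FROM bigtable ORDER BY id LIMIT $offset, $pagesize", $link);
    while ($row = mysql_fetch_assoc($result)) {
        echo $row['id'] . ' ' . $row['name'] . "<br>\n";
    }
    ?>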

    Well, I've found that the ODBC query I call in PHP saves the results to a resource ID.
    Sure about that? The resource ID is just a number like '8' or '2' and works as a reference to the actual result set.
    I don't think you can do anything in between:
    $result = mysql_query("select var1 from table");
    How would you? The mysql_query() has either executed or it hasn't.
    I am querying a rather large database and I don't even need to display the information; I would just like to save it to a file or pass it to another function.
    Hum. Quite vague, and I don't really see the problem: if you don't need to echo it, then what does it matter whether the response time is a second or so?
    Then you are just doing some preprocessing and preparing some files to read from, etc., but I frankly doubt it would ever improve performance significantly.

  • #5 - spitenmalice (New to the CF scene)

    ODBC speed

    Well, the ODBC data source I am connecting to is kind of a mess, and I'm sure the speed could be a problem just because of the way I'm connecting. It's a PVX ODBC driver that I had to bridge to, because there is no Unix client support with the crappy database. I use Easysoft's OOB (demo) right now to do some tests.

    To query one row with

    PHP Code:
    SELECT column0 FROM table WHERE column1 = 'var'
    and if it's a database with 29,000 rows, it takes about 4 minutes.

    The biggest problem is that some databases are 300,000+ rows, and some are even up to 6 million, so those take a very long time.

    Now, yes, I am trying to make this quicker by moving these databases to MySQL. The problem is that I would have to do it every night to keep the information up to date. I wish we could just write our own software rather than use this other crappy PVX one, but I'm not the boss.

    So I've made something like this:

    PHP Code:
    $result "select * from $odbctable";
    $data odbc_exec($odbclink$result);
    while (
    $row odbc_fetch_array($data))
    {
    foreach (
    $row as $odbcinfo)
    {
    $mysqlinfo $mysqlinfo.", '".addslashes($odbcinfo)."'";
    }
    $insertsql "INSERT INTO ".$odbctable." (".$odbccolumns.") VALUES (".$mysqlinfo.")";
    mysql_query($insertsql);
    $mysqlinfo "";
    echo 
    mysql_error()."<br>\n";

    This takes about 3.5 minutes for a database with 29,900 rows, and it doesn't actually start putting the data in until a minute or so after I've submitted the request. I was wondering what was slowing it down; I'm fairly sure it's because it gets all the information from the query before it starts to insert. It's also safe to say that pulling the information from two different data source managers, one local and one on the other side of the bridge on Windows, has something to do with it.
    But is there any way to make this quicker? I've set the bridge to many different fetch sizes to see if that would help. Nothing yet.
    Better yet, is there already a program that can export ODBC databases to MySQL and be set up to run every night? Or is that a sad little dream I should never expect to be possible in the 12-hour night I have to get it done in?
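    One idea I haven't tried yet, and I'm not sure it helps since the ODBC fetch may be the real bottleneck, is batching several rows into each INSERT so mysql_query() runs far less often. Roughly like this, reusing the same $odbclink, $odbctable and $odbccolumns variables as above:

    PHP Code:
    <?php
    // Untested idea: group rows into multi-row INSERT statements so
    // mysql_query() runs once per batch of 500 rows instead of once per row.
    // Assumes the ODBC link and MySQL connection from the script above are open.
    $data  = odbc_exec($odbclink, "SELECT * FROM $odbctable");
    $batch = array();

    while ($row = odbc_fetch_array($data)) {
        $values = array();
        foreach ($row as $odbcinfo) {
            $values[] = "'" . addslashes($odbcinfo) . "'";
        }
        $batch[] = "(" . implode(", ", $values) . ")";

        if (count($batch) >= 500) {
            mysql_query("INSERT INTO $odbctable ($odbccolumns) VALUES " . implode(", ", $batch));
            $batch = array();
        }
    }
    // Insert whatever is left over after the loop.
    if (count($batch) > 0) {
        mysql_query("INSERT INTO $odbctable ($odbccolumns) VALUES " . implode(", ", $batch));
    }
    ?>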

  • #6 - raf (Master Coder)
    It's impossible to really comment or advise on this, since we don't know your situation.

    For instance, why do you need to move the complete DBs every night? You could set up some sort of incremental refresh and only process the records with a timestamp from the previous day.
    And why migrate the data over the webserver? You could create some sort of offload from your original DB to a text file and then load that from the commandline (rough sketch below).
    Maybe you have good reasons for all of that, but we don't have much info to go on.
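    Something along these lines, just as an illustration (the DSN, paths and table name are made up, and it assumes LOCAL INFILE is allowed on your mySQL server); run it as a commandline/cron script rather than through the webserver:

    PHP Code:
    <?php
    // Illustration only: dump the ODBC data to a tab-separated file first,
    // then bulk-load it into mySQL in one statement. DSN, paths, connection
    // details and table name are placeholders.
    $conn   = odbc_connect('MyDSN', 'user', 'pass');
    $result = odbc_exec($conn, 'SELECT * FROM bigtable');

    $fp = fopen('/tmp/bigtable.txt', 'w');
    while ($row = odbc_fetch_array($result)) {
        fwrite($fp, implode("\t", array_map('addslashes', $row)) . "\n");
    }
    fclose($fp);
    odbc_close($conn);

    $link = mysql_connect('localhost', 'user', 'pass');
    mysql_select_db('mydb', $link);

    // LOAD DATA is much faster than row-by-row INSERTs.
    // (The same statement could also be run from the mysql commandline client.)
    mysql_query("LOAD DATA LOCAL INFILE '/tmp/bigtable.txt' INTO TABLE bigtable", $link);
    echo mysql_error();
    ?>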

    The more fundamental solution is of course switching DB format or optimising your DB design.
    By the way, 6 million records is probably too much to use in a web environment where you need fast responses; I'm quite sure some of those records could be archived or so.
    I think the max limit for reasonable response times is about 2 million, and above that you'll need a more elaborate setup like federated DBs.

  • #7 - Super Moderator (Perth, Australia)
    What is the actual database being utilised?

    With your data tier so far removed you are going to have delays when fetching large amounts of data (especially via ODBC, and more so over an ODBC-to-ODBC bridge), but the latency on a simple 'SELECT column0 FROM table WHERE column1 = 'var';' should still be in the seconds rather than minutes unless a large dataset is returned?

