Tuesday, March 27, 2012

Copying / Deleting Huge Data from a Table

Hi,
I have a problem with a "userlogs" table. The table holds information about
user actions. It has nearly 800,000,000 records in it, roughly 100 GB of data
and 50 GB of indexes.
I have tried to copy some of the data (about 1 million records) to a temporary
database, but I couldn't manage it.
This is the query; the table has an index containing only "id":

declare @start_index int, @end_index int
set @start_index = 800000050
set @end_index = 800000250

insert into userlogsbackup select * from userlogs
where id > @start_index and id < @end_index

But the select fails unless I specify "top 10", for example. What else can
I try?
Thanks in advance.

If the index is not clustered then SQL Server can't do a range scan on it
without also doing a bookmark lookup for each row found. So the more rows you
touch, the lower the chance of it using the index. In your case the optimizer
does not know how many rows will be affected, since it does not know the
values of the two variables. It guesses, and the guess for a range is quite a
large percentage (I can't remember offhand), which will negate the use of an
index. Do you get the proper plan if you hard-code the two values?
You can try creating a stored procedure that takes the two values as
parameters. That way the optimizer will use the actual values passed the
first time to create the plan. Does it work with a TOP 250?

Andrew J. Kelly SQL MVP
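
For illustration, here is a minimal sketch of the stored-procedure approach
described above, reusing the table names and range values from the original
post (the procedure name is invented for the example):

-- With parameters instead of local variables, the optimizer sniffs the actual
-- range values on the first execution and builds the plan from them, rather
-- than guessing the selectivity of the range.
create procedure dbo.CopyUserlogsRange
    @start_index int,
    @end_index int
as
begin
    insert into userlogsbackup
    select *
    from userlogs
    where id > @start_index and id < @end_index
end
go

-- Example call with the values from the original post:
exec dbo.CopyUserlogsRange @start_index = 800000050, @end_index = 800000250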
"Hasan O." <hozavalsiz@.gmail.com> wrote in message
news:esQBHkKgGHA.1264@.TK2MSFTNGP05.phx.gbl...
> Hi,
> I have a problem about "userlogs" table . this table contains some
> information about users actions .The table has nearly 800.000.000 record
> in it and ~100Gb data , 50GB index.
> i have tried to copy some data ( like 1M records) to a temporary database,
> i couldnt manage it
> This is the query , the table has a index contains only "id"
> declare @.start_index int, @.end_index int
> set @.start_index = 800000050
> set @.end_index = 800000250
>
> insert into userlogsbackup select * from userlogs where id >
> @.start_index and id < @.end_index
> but the select fails unless specifying "top 10" for example. What else
> can i try ?
> Thanks in advance

Hi Andrew,
Thanks for your answer.
Yes, the index is not clustered, and it doesn't work with TOP 70 either.
I think I am going to write a while loop that selects, inserts, and deletes
one row at a time, like this:

while @start_index < @end_index
begin
1
2    insert into userlogsbackup select * from userlogs where id = @start_index
3    delete from userlogs where id = @start_index
4    set @start_index = @start_index + 1
5
end

I am also thinking of using "begin transaction" and "commit transaction".
Can you tell me which lines the transaction should contain?
Thanks again.
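
For illustration, a minimal sketch of the loop above with a transaction wrapped
around lines 2 and 3 (and SET XACT_ABORT ON so a run-time error rolls the pair
back), so a row is deleted from userlogs only if its copy into userlogsbackup
succeeds. The placement of the transaction is an assumption on my part, not
something settled later in the thread; table and column names are taken from
the post.

set xact_abort on  -- make any run-time error roll back the open transaction

declare @start_index int, @end_index int
set @start_index = 800000050
set @end_index = 800000250

while @start_index < @end_index
begin
    begin transaction
        -- line 2: copy the row to the backup table
        insert into userlogsbackup select * from userlogs where id = @start_index
        -- line 3: remove it from the source table
        delete from userlogs where id = @start_index
    commit transaction
    -- line 4: move on to the next id
    set @start_index = @start_index + 1
end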
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:uaZyrgLgGHA.4940@.TK2MSFTNGP05.phx.gbl...
> If the index is not clustered then it can't do a range scan without also
> doing bookmark lookups for each row found. So the more rows you will touch
> the less the chances of using the index. But in your case the optimizer
> does not know how many rows will be affected since it does not know the
> values of the two variables. As such it guesses and the guess for a range
> is quite a large percentage (can't remember off hand) which will negate
> the use of an index s. Do you get the proper plan if you hard code the
> two values? You can try creating a stored procedure that has the two
> values as parameters. That way the optimizer will use the actual values
> passed the first time to create the plan. Does it work with a TOP 250?
>
> --
> Andrew J. Kelly SQL MVP
>
> "Hasan O." <hozavalsiz@.gmail.com> wrote in message
> news:esQBHkKgGHA.1264@.TK2MSFTNGP05.phx.gbl...

Maybe if you post the actual DDL for the table, including all the indexes, and
tell us exactly what you are trying to accomplish, we can suggest something
better. Do you want to move any 1 million rows or a specific set? Is this a
one-time thing or will it be repeated? Can it be done in off hours?
Andrew J. Kelly SQL MVP
"Hasan O." <hozavalsiz@.gmail.com> wrote in message
news:eJs5eNMgGHA.1264@.TK2MSFTNGP05.phx.gbl...
> Hi Andrew ,
> thanks for your answer.
> yes index is not clustered . it doesnt work with top 70 .
> I think i am gonna write while loop for selecting , and inserting one by
> one like this
> while @.start_index and < @.end_index
> begin
> 1
> 2 insert into userlogsbackup select * from userlogs where id =
> @.start_index
> 3 delete from userlogs where id = @.start_index
> 4 set @.start_index = @.start_index + 1
> 5
> end
> i am also thinking to use "begin transaction and commit transaction" . Can
> you tell me if which lines transactions contains
> thanks again
>
> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
> news:uaZyrgLgGHA.4940@.TK2MSFTNGP05.phx.gbl...

>but the select fails unless specifying "top 10" for example.
What is the exact nature of the failure? What error code and message
is returned? Or is it simply taking a long time?
Roy Harvey
Beacon Falls, CT
>|||"Roy Harvey" <roy_harvey@.snet.net> wrote in message
news:cu3e72d978qc99vfo1mclnqhudtgh31r71@.
4ax.com...
> What is the exact nature of the failure? What error code and message
> is returned? Or is it simply taking a long time?
It was taking too much time, so I cancelled the query.
I am using the one-by-one move operation now.
Thanks for the answer.
