Showing posts with label writing. Show all posts

Tuesday, March 6, 2012

Adding multiple textboxes to PageHeader

I am creating matrix reports programmatically and have come up against a problem where I can only add a single textbox to the page header. When writing the report in the report designer I can add multiple textboxes to the page header, but when trying to add them using the ReportItem object I keep getting a "System.Object[] cannot be used in this context" error.

for (int i = 0; i < Items.Count; i++)
{
    reportItems.Items = new object[] { CreateTextBox(Items[i].ItemName, Items[i].ItemMessage, Items[i].ItemStyle) };
}

This is the line of code where I am hitting the problem. It executes fine, but only the last textbox ends up in the Items array.

If I change the code to

for (int i = 0; i < Items.Count; i++)
{
    reportItems.Items[i] = new object[] { CreateTextBox(Items[i].ItemName, Items[i].ItemMessage, Items[i].ItemStyle) };
}

It executes, but fails when trying to render it into XML.

Has anyone managed to get multiple textboxes programmatically into the page header?

In the first approach you are overwriting the Items array on each iteration of the for loop. In the second approach you are setting each entry in the Items array to a new object array, so you end up with an array of arrays. Instead, you should assign the textbox object directly to the array element. Try the following code.

// create a new array for the text boxes in the header
reportItems.Items = new object[Items.Count];

// iterate through all the Items, creating a textbox for each and adding it to the report items object array
for (int i = 0; i < Items.Count; i++)
{
    reportItems.Items[i] = CreateTextBox(Items[i].ItemName, Items[i].ItemMessage, Items[i].ItemStyle);
}

Ian

Friday, February 24, 2012

Adding entry to DB and getting unique ID at the same time

Hi all,

I'm writing a website with ColdFusion and when a user submits a
request and it's stored in the MS SQL database, I want the unique ID
(the identity field in the table) to be shown to the user on screen and
emailed to the user.

Now can I store data to the database (where the ID is created) and
return this as a variable in the same statement? I've seen this done
on many websites, but I have no idea how to do it in one step.

Thanks,

Alex.

Return to the user the value from @@IDENTITY (check this out in Books Online), or better, SCOPE_IDENTITY(), which is not affected by triggers.
You can also have SQL Server trigger off the email if this fits within your
project's stated performance requirements. Send the email via
xp_smtp_sendmail (http://sqldev.net/xp/xpsmtp.htm). But basically your
requirement to do it in one step could all be handled with a stored procedure.
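
A minimal sketch of such a stored procedure, with a hypothetical Requests table and column names (adapt to your schema):

```sql
-- Hypothetical table: Requests(RequestID INT IDENTITY, UserEmail VARCHAR(200), Detail VARCHAR(500))
CREATE PROCEDURE AddRequest
    @UserEmail VARCHAR(200),
    @Detail    VARCHAR(500)
AS
BEGIN
    INSERT INTO Requests (UserEmail, Detail)
    VALUES (@UserEmail, @Detail);

    -- SCOPE_IDENTITY() returns the identity value generated by the INSERT
    -- in this scope, so it is not affected by any triggers on the table
    SELECT SCOPE_IDENTITY() AS NewRequestID;
END
```

ColdFusion can then call the procedure with cfstoredproc and read NewRequestID from the result set, so the insert and the ID retrieval happen in one round trip.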

hth
Eric

"Alex" <alex@.totallynerd.com> wrote in message
news:2ba4b4eb.0401291136.6ec0ee16@.posting.google.c om...
> Hi all,
> I'm writing a website with Cold Fusion and when a user submits a
> request and it's stored in the MS SQL database, I want the unique ID
> (Identity field in table) to be given to the user on screen plus
> emailed to user.
> Now can I store data to the database (where the ID is created) and
> return this as a variable in the same statement? I've seen this done
> on many websites, but I have no idea how to do it in one step.
> Thanks,
> Alex.

Thursday, February 16, 2012

Adding Data Files

I inherited a server with a database that has 3 data files in the primary
filegroup, but SQL Server is only writing to the first one. It looks like
the 2nd and 3rd files were not created when the database was created, but
were added on later. The initial data file is 164 GB in size and the 2nd and
3rd are still 1 MB each. Any suggestions on why SQL Server is only writing
to the 1st data file?
Thanks,
Hari

SQL Server writes the data in each file using a proportional fill algorithm.
This algorithm determines the amount of free space in each file and splits
the data based on the % of free space in each file. If the file is only 1MB
it has no free space compared to the 164GB file. This means all or most of
the data goes to that one. Make the files much larger and you will start to
see data migrate over as you add data. All data files in the same file group
should be the same size so the data is spread evenly across all of them. If
you increase the size and reindex you will see the data start to get more
proportional over time.
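
A sketch of the resize, with hypothetical database and logical file names (the sizes are illustrative, not a recommendation):

```sql
-- Grow the two small files so they have free space comparable
-- to the large file; proportional fill can then use them
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_Data2, SIZE = 60GB);

ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_Data3, SIZE = 60GB);
```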
Andrew J. Kelly SQL MVP
"Hari Seldon" <HariSeldon@.discussions.microsoft.com> wrote in message
news:AE0BAAC1-7BF5-4F0A-99E0-20E0DE0929B0@.microsoft.com...
>I inherited a server with a database that has 3 data files in the primary
> filegroup, but SQL Server is only writing to the first one. It looks like
> the 2nd and 3rd files were not created when the database was created, but
> were added on later. The initial data file is 164 Gb in size and the 2nd
> and
> 3rd are still 1 Mb each. Any suggestions on why SQL Server is only
> writing
> to the 1st data file?
> Thanks,

Thanks. I had wondered if the mistake they made was in not making the
initial file size on the second two files the same as the size of the initial
file - or at least larger than 1 MB.
I'm trying to maintain this server until I can upgrade it to SQL Server 2005
and migrate to a more suitable environment. Unfortunately, the server has
only a single RAID 5 array to place all of the data and log files on and
performance is a real problem. Is there any performance benefit to having
multiple data files in a database when all of them are going to be located on
the same physical drives anyway? Also, is there any benefit to placing them
on different logical partitions on the array or does the fact that they're
still on the same physical array negate any benefit?
Hari
"Andrew J. Kelly" wrote:

> SQL Server writes the data in each file using a proportional fill algorith
m.
> This algorithm determines the amount of free space in each file and splits
> the data based on the % of free space in each file. If the file is only 1M
B
> it has no free space compared to the 164GB file. This means all or most of
> the data goes to that one. Make the files much larger and you will start
to
> see data migrate over as you add data. All data files in the same file gro
up
> should be the same size so the data is spread evenly across all of them. I
f
> you increase the size and reindex you will see the data start to get more
> proportional over time.
> --
> Andrew J. Kelly SQL MVP
> "Hari Seldon" <HariSeldon@.discussions.microsoft.com> wrote in message
> news:AE0BAAC1-7BF5-4F0A-99E0-20E0DE0929B0@.microsoft.com...
>

I would have thought you would be better off using emptyfile and dropping
them.
I can't think of a benefit.
Raid 5 is generally slow on writes and will probably be hurting your logfile
most.
If you are using a battery backed up raid controller you could try
dedicating the cache 100% to writes and see if that helps.
There's always a risk with caching writes but if there's a battery there it is
minimised.
If you could get the budget to get an extra pair of disks as a mirror for
the log you should get a decent benefit.
Even a pair of IDE/SATA with NT s/w mirroring would be better than nothing.
Try pointing out to the purse holders that if a disk failed on RAID 5, the
performance while running in degraded mode would probably bring the machine
to its knees.
Paul

I agree - I would prefer to just delete them if there's no benefit to having
them, but wanted to verify it first.
I probably should have gone into a little more detail about how the database
is used. We do a bulk insert once a day and the rest of the time it's used
for reads only, so the transaction logs aren't much of a factor in this case.
The tables are rather large and the only reason to keep the extra files
would be if it would help speed up queries. The server has a dual core
processor if that makes any difference. This is SQL Server 2000 Standard
Edition running on a Windows 2003 server.
We have new servers and lots of hard drives on order, so I just need to
tread water for a little longer.
Thanks,
Hari
"Paul Cahill" wrote:

> I would have thought you would be better off using emptyfile and dropping
> them.
> I can't think of a benefit.
> Raid 5 is generally slow on writes and will probably be hurting your logfi
le
> most.
> If you are using a battery backed up raid controller you could try
> dedicating the cache 100% to writes and see if that helps.
> There always a risk with caching writes but if there's a battery there it
is
> minimised.
> If you could get the budget to get an extra pair of disks as a mirror for
> the log you should get a decent benefit.
> Even a pair of IDE/SATA with NT s/w mirroring would be better than nothing
.
> Try pointing out the the purse holders that if a disk failed on raid 5 the
> performance while running in phantom mode would probably bring the machine
> to it's knees.
> Paul
>

It's query tuning and sneaking some extra memory in till then. Bear in mind
that some of your queries may be writing to tempdb.
We keep our tempdb on separate spindles. From what I have read, 2005 makes
much heavier use of tempdb especially with the new isolation levels (Row
level versioning).
Interesting little article by Tony Rogerson.
http://sqlblogcasts.com/blogs/tonyrogerson/archive/2006/08/24/958.aspx

I agree with Paul in that for your current situation you may be better off
dropping those files altogether.
Andrew J. Kelly SQL MVP
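
Dropping the extra files is a two-step operation, sketched here with hypothetical database and logical file names: empty each file into the rest of the filegroup, then remove it.

```sql
USE MyDatabase;

-- Move any data out of this file into the other files in its filegroup
DBCC SHRINKFILE (MyDatabase_Data2, EMPTYFILE);

-- Once the file is empty, it can be removed from the database
ALTER DATABASE MyDatabase REMOVE FILE MyDatabase_Data2;
```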
"Hari Seldon" <HariSeldon@.discussions.microsoft.com> wrote in message
news:C12EE151-5272-4697-B057-E773C64AADDD@.microsoft.com...[vbcol=seagreen]
>I agree - I would prefer to just delete them if there's no benefit to
>having
> them, but wanted to verify it first.
> I probably should have gone into a little more detail about how the
> database
> is used. We do a bulk insert once a day and the rest of the time it's
> used
> for reads only, so the transaction logs aren't much of a factor in this
> case.
> The tables are rather large and the only reason to keep the extra files
> would be if it would help speed up queries. The server has a dual core
> processor if that makes any difference. This is SQL Server 2000 Standard
> Edition running on a Windows 2003 server.
> We have new servers and lots of hard drives on order, so I just need to
> tread water for a little longer.
> Thanks,
> Hari
> "Paul Cahill" wrote:

That's what I'm going to do. Thanks for the tips!
Hari
"Andrew J. Kelly" wrote:

> I agree with Paul in that for your current situation you may be better off
> dropping those files altogether.
> --
> Andrew J. Kelly SQL MVP
> "Hari Seldon" <HariSeldon@.discussions.microsoft.com> wrote in message
> news:C12EE151-5272-4697-B057-E773C64AADDD@.microsoft.com...
>
>


Monday, February 13, 2012

Adding Column to table

I'm writing a procedure to add columns to the database but need to check if
the column already exists.
How does one check whether it exists? The reason is that if I have multiple
statements adding columns, the procedure will error out
and not go on to the second statement. So if rowguid already exists in the table
users, it will not do the next statement.
Thanks for your help
Stephen K. Miyasato
Alter table users ADD [rowguid] uniqueidentifier ROWGUIDCOL NOT NULL
CONSTRAINT [DF__Users__rowguid__4C220BCC] DEFAULT (newid())
Alter table users ADD User_Type smallInt

Here's one method:

IF COLUMNPROPERTY(OBJECT_ID('users'), 'rowguid', 'AllowsNull') IS NULL
ALTER TABLE users ADD rowguid UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL
CONSTRAINT [df_users_rowguid] DEFAULT (NEWID())
However, I don't recommend you do this in a stored procedure. One problem is
that you will still get errors if you reference the column in static code in
the same proc. That's because the column name has to be resolvable at
compile time and not just when a statement is executed. Another issue is the
disproportionate effort required to test, debug and maintain systems that
modify schema at runtime. Schema mods should happen at install time so I
can't think of many good reasons to do this in a proc.
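
If you do go this route, the same guard extends to each column, so a column that already exists never stops the script from reaching the next statement (the second check below assumes the User_Type column from the question):

```sql
IF COLUMNPROPERTY(OBJECT_ID('users'), 'rowguid', 'AllowsNull') IS NULL
    ALTER TABLE users ADD rowguid UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL
        CONSTRAINT [df_users_rowguid] DEFAULT (NEWID());

-- Each ALTER is guarded independently, so the script is re-runnable
IF COLUMNPROPERTY(OBJECT_ID('users'), 'User_Type', 'AllowsNull') IS NULL
    ALTER TABLE users ADD User_Type SMALLINT;
```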
David Portas
SQL Server MVP
--
"Stephen K. Miyasato" <miyasat@.flex.com> wrote in message
news:eJtl9toxFHA.900@.TK2MSFTNGP11.phx.gbl...
> I'm writing a procedure to add column to the database but need to check if
> the column already exist.
> How does one check to see if it exist. The reason is that if I have
> multiple statements adding columns then it seem that the procedure will
> error out and not go to the second statement. So if rowguid already exits
> in the table user, it will not do the next statement.
> Thanks for your help
> Stephen K. Miyasato
>
> Alter table users ADD [rowguid] uniqueidentifier ROWGUIDCOL NOT NULL
> CONSTRAINT [DF__Users__rowguid__4C220BCC] DEFAULT (newid())
> Alter table users ADD User_Type smallInt
>