
Thursday, March 29, 2012

Adjusting column width in queried dataset

I am trying to eliminate the extra trailing space in columns that have been returned from a DB query. The columns have character lengths of 40; I would like to return just the necessary characters.

Thanks in advance.

Mark.

Did you try rtrim(ltrim(columnname)) in the query?

cheers,

Andrew

|||

Thanks Andrew,

rtrim(columnname) did it.

Cheers,

Mark.
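
For reference, a minimal example of the fix (the table and column names are made up):

SELECT RTRIM(CustomerName) AS CustomerName
FROM dbo.Customers

RTRIM removes the trailing spaces that pad a char(40) column; wrapping it as LTRIM(RTRIM(...)) would strip leading spaces as well.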

AdHoc reporting against SQL 2K w/meta data?

Okay... We have a SQL2K database that has about 500 tables or so. It is normalized to a reasonable level and enforces all relationships with PK/FKs, not triggers. Hence, for a database-minded person it is fairly easy to read (as easy as a 500+ table database can be!).

Our users need adhoc query capabilities. Our report writer is simply overwhelmed. He doesn't need to be spending time writing a report that is intended to be run once.

I expect the best alternative would be to use some sort of adhoc reporting tool that is driven by metadata. We (the DBAs) could be responsible for maintaining the metadata and STILL have a man-hour savings over developing all these reports.

Here's the catch... We are on a TIGHT budget (the aerospace industry is still reeling a bit). Is anyone using a product or aware of a product that might be just the ticket for us? We have been investigating a product by LogiXML called LGX AdHoc (http://www.logixml.com/products/AdHoc/adhoc.htm). Looks promising. Anyone use or familiar with it?|||Can you describe what it does (I didn't find their eval)? I am using ActiveReport from DataDynamics, but it does require some knowledge and time to spend designing reports.

Tuesday, March 27, 2012

Adhoc report with join tables on

Does anyone know if a user has the capability to join two entities on an
additional field which is not a predefined primary key or foreign key at
design time? Users want to run the report by joining tables on certain fields
on the fly. I have not found a way to allow them to do that unless they have
Microsoft Visual Studio 2005 or SQL Server Business Intelligence Development
Studio installed. Is SQL Server Reporting Services the right candidate to serve
this request? Does anyone know any other tool with this capability?
Thanks!|||1. You can do this if you define the sql statement dynamically and allow the
user parameter choices that provide the ability to specify the tables and
joins expressions. (for how to define the sql statement dynamically,
building it up as an expression, just treat the command as an expression
like you would the expression to display in a text box. IOW, start it with
an = sign and build up the string or invoke a code function)
However, I don't think it's really the best way.
2. Does the user have access and understanding to create views on the
server? It might be best to define the report based on a view, and have the
report basically remain ignorant of the joins and table information. The
report would have a single parameter -- the name of the view to invoke --
and would send that information to a stored procedure which would validate
that the view exists and has appropriate columns, and then run the view or
error-handle as described in choice #3 below.
3. If the user does not have that ability or access, I think I would build
this report to run a stored procedure, passing the parameter information as
described in #1, and have the stored procedure build and execute the sql. I
could do better validation in the stored procedure (for example, validate
that the tables and fields chosen by the user actually exist, assuming these
elements cannot be a dropdown in the report interface). I would have the
sproc send back a default data set of one record with every item showing
appropriate error text (or something) if I couldn't handle it another way.
Basically I think choice #2 is the right way to go here and if the user
doesn't have that ability and access I'm wondering whether that user should
be specifying this information at all...
>L<
"Daisy" <diyfan@.msnews.group.post> wrote in message
news:DE6C8C52-FBE3-48D4-9EB6-F37BBD8C40E4@.microsoft.com...
> Does anyone know if user has the capability to join two entities with
> additional field which is not a predefined primary key or foreign key at
> the
> design time? User want to run the report by join tables with certain
> fields
> on the fly. I have not found a way to allow them to do that unless they
> have
> Microsoft visual studio 2005 or SQL server business intelligence
> development
> studio installed. Is SQL server reporting service a right candidate to
> serve
> user's request? Does anyone know any other tool with this capability?
|||Thank you very much for the idea! I had built the model with Report Builder
to let users choose any fields from the tables given for the report. Now users
want to join the tables not on the primary key field that was specified at
design time; they want to join the tables on certain non-key fields at
run time. I was stuck. I was only thinking of using the Report Builder model to
let users do this, and the report model cannot change the key field (join
relation) on the fly. I have not found anything online or in the MSDN library
telling me how to define the key at run time. As you suggested, by using the
report designer and a SQL stored procedure it is feasible to achieve the
task. Hope I did not misunderstand your post. If so, please let me know.
Thanks!
"Lisa Slater Nicholls" wrote:
> 1. You can do this if you define the sql statement dynamically and allow the
> user parameter choices that provide the ability to specify the tables and
> joins expressions. (for how to define the sql statement dynamically,
> building it up as an expression, just treat the command as an expression
> like you would the expression to display in a text box. IOW, start it with
> an = sign and build up the string or invoke a code function)
> However, I don't think it's really the best way.
> 2. Does the user have access and understanding to create views on the
> server? It might be best to define the report based on a view, and have the
> report basically remain ignorant of the joins and table information. The
> report would have a single parameter -- the name of the view to invoke --
> and would send that information to a stored procedure which would validate
> that the view exists and has appropriate columns, and then run the view or
> error-handle as described in choice #3 below.
> 3. If the user does not have that ability or access, I think I would build
> this report to run a stored procedure, passing the parameter information as
> described in #1, and have the stored procedure built and execute the sql. I
> could do better validation in the stored procedure (for example, validate
> that the tables and fields chosen by the user actually exist, assuming these
> elements cannot be a dropdown in the report interface). I would have the
> sproc send back a default data set of one record with every item showing
> appropriate error text (or something) if I couldn't handle it another way.
> Basically I think choice #2 is the right way to go here and if the user
> doesn't have that ability and access I'm wondering whether that user should
> be specifying this information at all...
> >L<
>
> "Daisy" <diyfan@.msnews.group.post> wrote in message
> news:DE6C8C52-FBE3-48D4-9EB6-F37BBD8C40E4@.microsoft.com...
> > Does anyone know if user has the capability to join two entities with
> > additional field which is not a predefined primary key or foreign key at
> > the
> > design time? User want to run the report by join tables with certain
> > fields
> > on the fly. I have not found a way to allow them to do that unless they
> > have
> > Microsoft visual studio 2005 or SQL server business intelligence
> > development
> > studio installed. Is SQL server reporting service a right candidate to
> > serve
> > user's request? Does anyone know any other tool with this capability?
> >
> > Thanks!
>
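
A rough T-SQL sketch of option #3 above: validate the user's choices, then build and run the join. All object and parameter names here are invented for illustration.

CREATE PROCEDURE dbo.usp_AdhocJoin
    @LeftTable sysname, @RightTable sysname,
    @LeftCol sysname, @RightCol sysname
AS
BEGIN
    -- Refuse to build any SQL unless the chosen tables and columns really exist.
    IF NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS
                   WHERE TABLE_NAME = @LeftTable AND COLUMN_NAME = @LeftCol)
       OR NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS
                      WHERE TABLE_NAME = @RightTable AND COLUMN_NAME = @RightCol)
    BEGIN
        -- The default one-record data set carrying error text, as described above.
        SELECT 'Invalid table or column choice' AS ErrorText
        RETURN
    END
    DECLARE @sql nvarchar(4000)
    SET @sql = N'SELECT a.*, b.* FROM ' + QUOTENAME(@LeftTable) + N' a INNER JOIN '
             + QUOTENAME(@RightTable) + N' b ON a.' + QUOTENAME(@LeftCol)
             + N' = b.' + QUOTENAME(@RightCol)
    EXEC sp_executesql @sql
END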

Addition of Table in merge replication

Hi,
I am dealing with merge replication (104 tables/database).
I need to add some new tables to the replication. There are two options:
1. Stop the replication and publish all tables including the new tables, then
set up the replication again.
2. Create a new subscription for these new tables alone.
Advise me, which one is better?
Thanks,
Soura.
SouRa,
you can just add the new table to the existing publication.
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)
|||This will cause a complete snapshot to be generated and distributed. A new
publication might be the answer.
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"Paul Ibison" <Paul.Ibison@.Pygmalion.Com> wrote in message
news:uCxBq%23hsFHA.1168@.TK2MSFTNGP11.phx.gbl...
> SouRa,
> you can just add the new table to the existing publication.
> Cheers,
> Paul Ibison SQL Server MVP, www.replicationanswers.com
> (recommended sql server 2000 replication book:
> http://www.nwsu.com/0974973602p.html)
>
|||AFAIR, although the new snapshot will include all tables, only the new table
will get distributed.
Cheers,
Paul
|||Hi,
I think you can add additional articles by using the system stored procedure
sp_addmergearticle.
After you've added it, you run the snapshot agent. This will generate a
complete snapshot but only the new article will be propagated by the merge
agent.
If you are using the no-sync option, make sure this table exists at the
Subscriber also.
Once you have tested it on a test server, you can implement it in production.
regards,
Herbert
"SouRa" wrote:

> Hi,
> I am dealing with merge replication (104 tables/database).
> I need to add some new tables in replication. There are two options
> 1. Stop the replication and publish all tables including new tables then
> again set the replication.
> 2. Create a new subscription for these new tables alone
> Advise me, Which one is better?
> Thanks,
> Soura.
>
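A minimal sketch of this, run at the publisher in the published database (the publication and table names are made up):

EXEC sp_addmergearticle
    @publication = N'MyMergePub',
    @article = N'NewTable',
    @source_owner = N'dbo',
    @source_object = N'NewTable'

Then run the snapshot agent; as noted above, the snapshot is regenerated in full but only the new article should be propagated by the merge agent.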

Sunday, March 25, 2012

additional data files not filling

We have a quad-processor SQL Server that runs OLTP transactions at a rate of
hundreds per second (read and write).

We used to have all the tables in 1 file but started to notice high contention on this file. We added 3 more files to match the processor count. The problem is that the 3 additional files are not filling with data. Does anyone know why this happens or can recommend a fix?
--
will|||Verify that the newly added files are in the appropriate filegroup (probably PRIMARY) and are of the appropriate file type (i.e., they were created as data files, not log files). Also, your new file(s) may not get data written to them if the old file still has space available.

Your logic for reducing contention by adding files to match the number of processors does not make sense. Contention is caused by physical IO going to the same physical disk and getting backed up because the pipe to the physical disk is too small. You would add files to a database filegroup to ease contention by creating the files on separate physical disks. It has no relation to the number of CPUs.

Regards,

hmscott|||Yes they are all the same file type (data files).

Microsoft claims that you should have a separate file per processor for high-contention OLTP SQL Servers. Parallelism is the goal here. SQL Server can only access 1 file per CPU at any one time. If it were a dual-processor machine it would have less impact, considering the OS & tempdb usually have something going on in parallel.

Since we have a quad-processor SQL Server, that means 4 files per filegroup.

As an FYI, we are running a RAID 10 with 14 disks.|||Hmm, not doubting you, but could you send a link to the MS info? I had not seen that before.

As for your issue, what's the state of the original data file? Full, nearly full or not even close? Also, can you verify that the new files you added were updateable (ie, not read-only)? Beyond that, I would try testing some scenarios out on a development server somewhere.

Regards,

hmscott
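
A quick way to check how full each file of the current database actually is (a sketch; SpaceUsed is reported in 8 KB pages):

SELECT name,
       size AS size_pages,
       FILEPROPERTY(name, 'SpaceUsed') AS used_pages
FROM sysfiles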

Thursday, March 22, 2012

adding user-defined data type to existing publication/subscription

Hello, I am not a sql server programmer.
I have a database which already has a publication set up, which sends a
bunch of tables to subscribers. Everything was okay, but now I need to
publish two more tables. These tables contain columns which are user
defined types, which means my subscribers do not have them defined on
their side.
When I try to start synchronizing I get an error: Column or parameter
#1: Cannot find data type u_Blah. (error number 2715)
which is, I guess, fair enough, since the table is trying to use
something that doesn't exist for the subscribers yet. If I understand
MSDN correctly (I tried reading so many articles...), in the Publication
--> Properties --> Articles tab, when I go into the properties
of the new articles, I can specify "Convert User Defined To base data
types", and then the subscribers will be able to pick up the user
defined types just fine. However (PLEASE tell me I understood this
wrong), that only works when the publication is being set up the very first
time. So in my situation, because I'm only adding articles, I cannot
do that.
The solution in MSDN seems to be to "manually copy the type (and the
associated assembly for a user-defined type) to each subscriber".
(msdn2.microsoft.com/en-us/library/ms152559.aspx) I have no idea what
this means. Can anyone shed some light on what must be done now?
Thank you
-Dorothy
This is definable at the article level, and if you select to use the base
types for your new articles, there should be no requirement to have them
exist on the subscriber. If you want to continue to use user-defined data
types, you can create a script to add them (using sp_addtype and operating
in the correct database, i.e. a 'USE' statement at the top) and then run this
script on the subscriber. How to do this? There are several options:
(a) manually open up Query Analyser and connect to the subscriber and then
run there
(b) use a pre-snapshot script on the publication - not for you as you are
just adding articles to an initialized publication
(c) use sp_addscriptexec, which will run the script on the subscriber the
next time it synchronizes
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com .
|||Just to clarify the current situation:
1) SQL2005 snapshot processing can handle column dependencies on
user-defined data types (both TSQL and CLR) by including the necessary
type\assembly definitions as part of the snapshot. There should be no need
to manually create the necessary types at the subscriber ahead of time using
a pre-snapshot script anymore as folks typically need to do on SQL2000.
2) SQL2005 DDL replication currently lacks capability to figure out new data
type dependencies introduced by a DDL action (such as adding a new column
depending on a UDT that doesn't exist at the subscriber as you attempted
below) As such, you would need to implement some other mechanisms
(sp_addscriptexec being one) to bring the necessary data type (+assembly)
definition over to the subscriber before the DDL action is replicated.
-Raymond
|||Hello,
the thread has been moved here mysteriously:
http://groups.google.com/group/micro...65417554cc611c
thank you
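
A sketch of the sp_addscriptexec approach (the publication name, file path, and type definition are all made up for illustration). First save a script that creates the type, e.g. C:\AddTypes.sql:

USE SubscriberDb
EXEC sp_addtype 'u_Blah', 'varchar(20)', 'NOT NULL'

Then, at the publisher, queue it to run at each subscriber on its next synchronization:

EXEC sp_addscriptexec
    @publication = N'MyPublication',
    @scriptfile = N'C:\AddTypes.sql'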

Adding transaction causes connection failure?

I have some tables that I need to wipe out and reload once per day. So I have created a T-SQL task that does a truncate table, then a data flow task, then an update statistics task. Running those works fine, but as soon as I put them all into a sequence container and set the container to require transactions, the first step gets a connection failure: "Failed to acquire connection "<name>". Connection may not be configured correctly or you may not have the right permissions on this connection". I am using a .Net SqlClient data provider for the T-SQL task, and in the data flow task I have to use an OLEDB provider so that I can run the task locally in development.

Is there something I am doing wrong or is there some other way to handle truncate, load, update stats all in a transaction?

Thanks.

Tim

Unfortunately, the ADO.NET Connection Manager doesn't currently support DTC transaction enlistment, so the sequence container can't coordinate one transaction across your three tasks. I believe your best workaround (and one which will keep all the activity on one connection, which seems preferable in your case anyway), would be to use just the OLE DB connection.

To do this, you'll need to turn your Execute T-SQL task into a plain Execute SQL task (your TRUNCATE TABLE will work fine there). Next, create an Execute SQL Task to run your UPDATE STATISTICS statement (use the "View T-SQL" option in your Update Statistics task editor to see the SQL you'll want to run). You'll then be able to get rid of the ADO.NET connection manager, and your Sequence Container will be able to properly coordinate the transaction across the three operations.

Hope this helps!

|||That didn't work. I still get the failed to acquire connection error.|||

After further research I found that it was because the DTC was not set up to allow inbound transactions on the server. The config settings in question are described here:

http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=230390&SiteID=1

|||So fixing the DTC config helped me get through the first hurdle of connection failures, but now, it just gets stuck in the validation phase of my data flow.|||

Interesting. I just saw this myself, and what's happening is that executing the TRUNCATE in a transaction appears to cause DTC to obtain a Schema Modification lock (LCK_M_SCH_M). This in turn blocks the destination's ability to fetch its metadata, if ValidateExternalMetadata is True on your destination. Try disabling the destination's ValidateExternalMetadata.

-David

|||

That fixed it. I now have all the tasks running in a transaction. It seems odd that the truncate would cause a lock that doesn't allow reads of schema data. Thanks for your help.

Tim

|||I have the same problem.

I have a Sequence Container with 2 SQL tasks and 2 Data Flow tasks inside.

Transaction option:
On the package: Supported
On the sequence container: Required
On the 4 tasks inside the container: Supported
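
For reference, the statements the two Execute SQL Tasks end up running in the workaround above (the table name is made up):

TRUNCATE TABLE dbo.DailyLoad

UPDATE STATISTICS dbo.DailyLoad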


Adding to 2 different tables

Hi Everyone,

I have a page with a textbox and a dropdown list.
The user will enter a company name in the text box and select a number from 1 - 20 (number of delegates for that company) in the dropdown list.

I've got the text box and dropdown writing to tblCompany, but I would also like it to write to tblUsers at the same time. The reason for this is that I need it to set up the number of users that have been selected in the dropdown list.

Here is the codebehind file:

Imports System.Data.SqlClient
Imports System.Web.Configuration

Partial Class cms_Management_Company_NewCompany
    Inherits System.Web.UI.Page

    Protected Sub btnSave_Click(ByVal sender As Object, ByVal e As System.Web.UI.ImageClickEventArgs) Handles btnSave.Click
        Dim conString As String = WebConfigurationManager.ConnectionStrings("General").ConnectionString

        Dim con As New SqlConnection(conString)
        Dim cmd As New SqlCommand("INSERT INTO tblCompany (CompanyName, CompanyActive, CompanyArchived, CompanyDelegates) VALUES (@CompanyName, @CompanyActive, @CompanyArchived, @CompanyDelegates)", con)
        cmd.Parameters.AddWithValue("@CompanyName", txtCompanyName.Text)
        cmd.Parameters.AddWithValue("@CompanyDelegates", lstDel.SelectedValue)
        cmd.Parameters.AddWithValue("@CompanyActive", True)
        cmd.Parameters.AddWithValue("@CompanyArchived", False)
        Using con
            con.Open()
            cmd.ExecuteNonQuery()
            con.Close()
        End Using

        ' One placeholder user per delegate chosen in the dropdown
        ' (NoLoops was never assigned in the original code).
        Dim NoLoops As Integer = CInt(lstDel.SelectedValue)

        Dim con2 As New SqlConnection(conString)
        Dim cmd2 As New SqlCommand("INSERT INTO tblUsers (UserForeName, UserSurname, UserEmail, UserUsername, UserPassword, UserActive, UserTypeID, CompanyID) VALUES (@UserForeName, @UserSurname, @UserEmail, @UserUsername, @UserPassword, @UserActive, @UserTypeID, @CompanyID)", con2)
        cmd2.Parameters.AddWithValue("@UserForeName", "First Name - Delegate 1")
        cmd2.Parameters.AddWithValue("@UserSurname", "Surname - Delegate 1")
        cmd2.Parameters.AddWithValue("@UserEmail", "Email Address - Delegate 1")
        cmd2.Parameters.AddWithValue("@UserUsername", "Username - Delegate 1")
        cmd2.Parameters.AddWithValue("@UserPassword", "Password - Delegate 1")
        cmd2.Parameters.AddWithValue("@UserActive", True)
        cmd2.Parameters.AddWithValue("@UserTypeID", 2)
        ' The original added @UserTypeID twice; the INSERT expects @CompanyID here,
        ' which should be the new company's ID -- see the question below.
        cmd2.Parameters.AddWithValue("@CompanyID", 1)
        Using con2
            con2.Open()
            For i As Integer = 1 To NoLoops
                cmd2.ExecuteNonQuery()
            Next i
            con2.Close()
        End Using

        Response.Redirect("~/cms/Management/Company/Company.aspx")
    End Sub
End Class
The other thing I am not sure of is getting the ID of the new company and assigning it to the delegates in tblUsers (to associate them with the new company).
I hope this makes sense.
Thank you very much guys.
Scott.

Hi,

To get the recently added record's ID use

SELECT SCOPE_IDENTITY()

and catch the returned value using

cmd.ExecuteScalar()
 
HTH
Regards

|||

Hi,

Thanks for the reply, where in the code would I put these elements? I am very new to .NET.

thanks again,

Scott.
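
A minimal sketch of where these elements fit: append SELECT SCOPE_IDENTITY() to the company INSERT, call cmd.ExecuteScalar() instead of cmd.ExecuteNonQuery(), and pass the returned value to the @CompanyID parameter of the second command. The SQL batch would look like this (assuming tblCompany has an IDENTITY primary key):

INSERT INTO tblCompany (CompanyName, CompanyActive, CompanyArchived, CompanyDelegates)
VALUES (@CompanyName, @CompanyActive, @CompanyArchived, @CompanyDelegates)
SELECT SCOPE_IDENTITY()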

Tuesday, March 20, 2012

Adding tables to SQL server database in visual studio?

Hi

I keep reading that it's possible to add, amend, etc., tables in Visual Studio, but to do so I need access to the features that allow this.
I can connect to the database but I haven't been able to use any of these database features since they are "blanked" out.
Obviously, I can do all this at the server, but ideally I would like to do it remotely.

Any suggestions?
Thanks in advance.|||In your server explorer, can you navigate to the tables?
-Servers
- <your server>
- SQL Servers
- <your SQL server>
- <your database>
- tables

From there you should be able to add, change, or delete tables.

-Ian|||Hi Ian

Thanks for the reply.

I can navigate to the tables but the menu features are not available for adding, etc., tables.
Could it be something at the server end that I need to change?

Thanks in advance.

Adding table to a FileGroup

I have a Database called Products with 3 big tables Rims, Tires, and Stock
my database is as follows.
Products.mdf and Products.ldf
How can I create file groups for the tables Rims, Tires, and Stock and
attach these tables?
Example:
Products.mdf
Rims.ndf
Tires.ndf
Stock.ndf|||Example:
ALTER DATABASE Products ADD FILEGROUP stock_filegroup
ALTER DATABASE Products ADD FILE
(NAME = 'stock01',
FILENAME = 'C:\MSSQL\stock01.ndf',
SIZE = 10,
MAXSIZE = 20)
TO FILEGROUP stock_filegroup
The easiest way to move an existing table to another filegroup is to create
or re-create a clustered index:
CREATE CLUSTERED INDEX idx_stock ON Stock (x) ON stock_filegroup
For a non-clustered table you will have to re-create the table.
Typically it's only useful to create separate filegroups if you place the
files on separate physical devices or arrays. I assume that is what you
intend by placing your tables in this way.
David Portas
SQL Server MVP
--
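
For completeness, a new table can also be placed on a filegroup directly when it is created (a hypothetical table for illustration):

CREATE TABLE StockHistory (
    StockID int NOT NULL,
    AsOf datetime NOT NULL
) ON stock_filegroup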

Monday, March 19, 2012

adding sql values from 2 tables

Hello all, I have 2 SQL tables, and they each contain a column with bigint values. What I want to do is add up the values from both tables, and then display the result as 1 number. Suppose I have the following:

table name: DownloadTable

fileName | DownloadSize
file1    | 45
file2    | 50
file3    | 20

--------------

second table

table name: VideoTable

fileName | VideoSize
file1    | 40
file2    | 60
file3    | 20

------------

Now, I want this to output 120, which is the sum of the sums of the values of tables 1 and 2. I've already tried the following:

SELECT SUM(DownloadTable.DownloadSize) + SUM(VideoTable.VideoSize) FROM DownloadTable, VideoTable

However, when I ran this, it gave me a huge number that was far from what it was supposed to output. Can anyone help me?

Thanks in advance

Regards,

What is the expected output?|||In my real application, the expected value was 3280. However, I got 37624.|||Based on the values you provided, how do you arrive at 3280? Is there any business formula?|||

Well, that's the thing. I don't know what I need to do to arrive at the expected value. When I try to retrieve the sum of each table individually, it works, and I get the expected output. But when it comes time to add up the values of those 2 tables, I get into trouble.

And what is a business formula?

Thanks for your replies

|||

Is that your exact SQL statement? You've not specified a join condition, so your result set contains a row for every possible combination of rows from the two tables (a cross join).

Instead, try something like this:

DECLARE @Total BIGINT

SELECT @Total = SUM(DownloadSize) FROM DownloadTable
SELECT @Total = @Total + (SELECT SUM(VideoSize) FROM VideoTable)
SELECT @Total AS Total

|||

Hello tmorton, thanks for your reply. However, when I pressed the "test query" button in the dialog box to try to run it, the following error message popped up in a message box:

"The variable name "@Total" has already been declared. Variable names must be unique within a query batch or stored procedure"

I also tried modifying the variable name Total, but got the same error.

I'm really sorry about this, I'm not much of a SQL programmer. I've just read a tutorial or 2 about it. Oh, and I copied the code you provided exactly.

|||Try this out:

select t = sum(DownloadSize) + (select sum(VideoSize) from VideoTable) from DownloadTable

I think this should work for you.|||Wow, thanks a lot Dhaliwal. Everything is working perfectly right now.
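
An equivalent form that avoids the accidental cross join entirely is to compute each sum in its own subquery (a sketch using the table names from this thread):

SELECT (SELECT SUM(DownloadSize) FROM DownloadTable)
     + (SELECT SUM(VideoSize) FROM VideoTable) AS TotalSize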

Sunday, March 11, 2012

Adding Query Results To A Table

hello everybody,

I have a small problem and was hoping someone could help. I want to run a query where I select results from two tables. Once this is done I want to take those results and update a table that is already created with those results. How would I do this? The only way I know how to do this is by writing this:

select CompanyName, ContactName, Address, City, Region, ProductName, UnitPrice INTO TABLE2
From Suppliers, Products

But the problem with this is that I can only put the query results into a new table that I name above. I can't insert them into a table that has already been created.

This is the error that I receive:
There is already an object named 'TABLE2' in the database.

Please Help!

thanks|||INSERT INTO TABLE2 (
CompanyName
, ContactName
, Address
, City
, Region
, ProductName
, UnitPrice)
SELECT
CompanyName
, ContactName
, Address
, City
, Region
, ProductName
, UnitPrice
From Suppliers, Products

I'd also say you're missing a JOIN|||So say the information changes in the 2 tables that I am querying: when I run this insert code, it will reinsert all the records into table2, which will basically be updating it. Am I getting this right? That's what I am trying to do. I could also turn that into a job so that it could run automatically, right?

thanks|||Inserting will add records to the table unless you have constraints that prevent duplicate records/keys. Updating modifies the original field(s) of the record you are working with. Are you new to database design?|||Can you elaborate on what you are trying to do?|||Originally posted by rnealejr
Inserting will add records to the table unless you have constraints that prevent duplicate records/keys. Updating modifies the original field(s) of the record you are working with. Are you new to database design?

Actually yes, I am new to this. That's probably why I sound redundant. So I need as much help as possible.

thanks|||Originally posted by rnealejr
Can you elaborate on what you are trying to do?

Okay, here it goes. I created a table called "table2". This table has all the columns named already. I want to hold records from a query that I run on two other tables and then place the results in table2. The columns that I query are named the same as the columns I created in table2. I just want all those records to show up in table2.|||Aren't you planning on using the code I posted?|||Originally posted by Brett Kaiser
Aren't you planning on using the code I posted?
Actually I used it and it worked well. I was just wondering if there were other ways of doing it.
Thanks for your help. I am pretty new at this.
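
For reference, Brett's INSERT with the join condition made explicit. On Northwind-style tables like these the join key would be SupplierID, but substitute whatever column actually relates your two tables:

INSERT INTO TABLE2 (CompanyName, ContactName, Address, City, Region, ProductName, UnitPrice)
SELECT s.CompanyName, s.ContactName, s.Address, s.City, s.Region, p.ProductName, p.UnitPrice
FROM Suppliers s
JOIN Products p ON p.SupplierID = s.SupplierID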

Adding Permissions

We created a database with approximately 80 tables and 170 stored procedures.
We were not too bright to begin with, and one thing we did not consider was
not using sa as the login. Now, we need to create a user that does not have
full sa privileges but can execute all stored procedures and view all data in
the tables. Some of the stored procedures use dynamic sql (sp_executesql). We
also will be creating a new database, which corresponds to a specific test
event, about two or three times a month.
Question I have is: can we create one user that can access all the stored
procedures and tables in ALL databases (as they are created), and is there a
fast way of granting privileges to all these objects without having to go into
the permissions for each one?|||You can create a login and create a user which is in the db_datareader role in
every database and grant it rights to execute all stored procedures.
Hilary Cotter
Director of Text Mining and Database Strategy
RelevantNOISE.Com - Dedicated to mining blogs for business intelligence.
This posting is my own and doesn't necessarily represent RelevantNoise's
positions, strategies or opinions.
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"Wannabe" <Wannabe@.discussions.microsoft.com> wrote in message
news:FE43D341-3AD0-459D-AA6A-794CA3E02915@.microsoft.com...
> We created a database with approximately 80 tables and 170 stored
> procedures.
> We were not too bright to begin with and one thing we did not consider was
> not using sa as the login. Now, we need to create a user that does not
> have
> full sa privileges but can execute all stored procedures and view all data
> in
> the tables. Some of the stored procedures use dynamic sql (sp_executesql).
> We
> also will be creating a new database, which corresponds to a specific test
> event, about two or three times a month.
> Question I have is can we create one user that can access all the stored
> procedures and tables in ALL databases (as they are created) and is there
> a
> fast way of granting privileges to all these objects without having to go
> in
> the permissions for each one?
|||Also note that in SQL Server 2005, you can grant SELECT and EXECUTE
permissions at database level, so you don't need to perform grants for each
table or procedure.
Do you need to do this in all databases or only in databases of a certain
type? If the answer is all, consider setting this up in the model database.
Thanks
Laurentiu Cristofor [MSFT]
Software Design Engineer
SQL Server Engine
http://blogs.msdn.com/lcris/
This posting is provided "AS IS" with no warranties, and confers no rights.
"Hilary Cotter" <hilary.cotter@.gmail.com> wrote in message
news:up6oLty7GHA.1256@.TK2MSFTNGP04.phx.gbl...
> You can create a login and create a user which is in the data_reader role
> in every database and grant it rights to execute all stored procedures.
> --
> Hilary Cotter
> Director of Text Mining and Database Strategy
> RelevantNOISE.Com - Dedicated to mining blogs for business intelligence.
> This posting is my own and doesn't necessarily represent RelevantNoise's
> positions, strategies or opinions.
> Looking for a SQL Server replication book?
> http://www.nwsu.com/0974973602.html
> Looking for a FAQ on Indexing Services/SQL FTS
> http://www.indexserverfaq.com
>
> "Wannabe" <Wannabe@.discussions.microsoft.com> wrote in message
> news:FE43D341-3AD0-459D-AA6A-794CA3E02915@.microsoft.com...
>
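
On SQL Server 2005, Laurentiu's database-level grants look roughly like this per database (the login, user, and database names are placeholders):

USE SomeTestEventDb
GO
CREATE USER reporting_user FOR LOGIN reporting_login
GO
GRANT SELECT, EXECUTE TO reporting_user
GO

Setting the same user and grants up in the model database means each newly created database inherits them automatically.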

Tuesday, March 6, 2012

Adding new content to a table by referring to other tables using SQL

I keep product name, id in Table1.

I keep Category name, id in Table2.

I keep relation between product and category (product_id, category_id) in Table3.

I have added some products to the table with proper category.

Work fine

But for some products I did not specified any category

(ie their id is not present in Table3)

But now I want all such products

(ie all products whose category is nothing)

To be associated with category_id 10

Can I do this simply with SQL queries?

Hope you can help me

sujith

Hello sujith!

UPDATE [TableName] -- in your case it will be Table3

SET [FieldName] = 10 -- in your case it will be category_id

WHERE [FieldName] IS NULL -- in your case I don't know if it is NULL or ''

Take a look at http://www.w3schools.com to learn more about SQL statements.

Have nice queries!

|||

But I don't have any entry in Table3 if the category of a product is nothing.

I insert data into Table3 if and only if a product belongs to a category (this is how I save space).

|||

ohhhh...

You want to see all products that don't have a category_id and insert them into Table3 with category_id = 10!

Is this what you want?

Sorry if I am not understanding your question!!

|||


Following SQL statement might suit your need:

insert into product_category
select id, '10'
from product
where not exists (select * from product_category
                  where product_category.pro_id = product.id)

Adding new columns to all tables using a script

Hi, I'm trying to add a modified datetime and userid to all 72 tables in my
SQL 2000 database. I have the script to do one table, and a cursor, but it
won't run across all tables. Any help would be appreciated. Thanks...

DECLARE @tName varchar(40)
DECLARE C1 CURSOR FOR
select name from sysobjects where type = 'U'
OPEN C1
FETCH NEXT FROM C1 INTO @tName
-- Check @@FETCH_STATUS to see if there are any more rows to fetch
WHILE @@FETCH_STATUS = 0
BEGIN
-- This is executed as long as the previous fetch succeeds
BEGIN TRANSACTION
SET QUOTED_IDENTIFIER ON
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
SET ARITHABORT ON
SET NUMERIC_ROUNDABORT OFF
SET CONCAT_NULL_YIELDS_NULL ON
SET ANSI_NULLS ON
SET ANSI_PADDING ON
SET ANSI_WARNINGS ON
COMMIT
BEGIN TRANSACTION
ALTER TABLE @tName ADD
ModifiedDT datetime NULL,
ModifiedUserID int NULL
GO
COMMIT
FETCH NEXT FROM C1
END
CLOSE C1
DEALLOCATE C1
GO
GOHi

As this is not production code then you may want to check out the
undocumented sp_MSforeachtable

http://groups.google.co.uk/groups?h...2%40tkmsftngp03

http://groups.google.co.uk/groups?h...man%40127.0.0.1

John

"Paul" <psampson@.uecomm.com.au> wrote in message
news:1061944796.500758@.proxy.uecomm.net.au...
> Hi, I'm trying to add a modified datetime and userid to all 72 tables in
my
> SQL 2000 database. I have the script to do one table, and a cursor, but it
> won't run across all tables. Any help would be appreciated. Thanks...
> DECLARE @.tName varchar(40)
> DECLARE C1 CURSOR FOR
> select name from sysobjects where type = 'U'
> OPEN C1
> FETCH NEXT FROM C1 INTO @.tName
> -- Check @.@.FETCH_STATUS to see if there are any more rows to fetch
> WHILE @.@.FETCH_STATUS = 0
> BEGIN
> -- This is executed as long as the previous fetch succeeds
> BEGIN TRANSACTION
> SET QUOTED_IDENTIFIER ON
> SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
> SET ARITHABORT ON
> SET NUMERIC_ROUNDABORT OFF
> SET CONCAT_NULL_YIELDS_NULL ON
> SET ANSI_NULLS ON
> SET ANSI_PADDING ON
> SET ANSI_WARNINGS ON
> COMMIT
> BEGIN TRANSACTION
> ALTER TABLE @.tName ADD
> ModifiedDT datetime NULL,
> ModifiedUserID int NULL
> GO
> COMMIT
> FETCH NEXT FROM C1
> END
> CLOSE C1
> DEALLOCATE C1
> GO|||Paul (psampson@.uecomm.com.au) writes:
> Hi, I'm trying to add a modified datetime and userid to all 72 tables in
> my SQL 2000 database. I have the script to do one table, and a cursor,
> but it won't run across all tables. Any help would be appreciated.

There are a number of errors in your script:

> DECLARE @tName varchar(40)
> DECLARE C1 CURSOR FOR

While not an error, I recommend that you make your cursors INSENSITIVE
as a matter of routine. The default keyset-driven cursors can sometimes
give nasty surprises.

> select name from sysobjects where type = 'U'
> OPEN C1
> FETCH NEXT FROM C1 INTO @tName
> -- Check @@FETCH_STATUS to see if there are any more rows to fetch
> WHILE @@FETCH_STATUS = 0

I recommend that you write cursor loops as

OPEN cur
WHILE 1 = 1
BEGIN
FETCH cur INTO @var1, @var2...
IF @@fetch_status <> 0
BREAK
-- Real job follows here.
END
DEALLOCATE cur

By only having one FETCH statement you make your code safer, because it's
easy to change the SELECT statement, and the new column to the first
FETCH, but forget the second, which may be the screens below.

> BEGIN TRANSACTION
> SET QUOTED_IDENTIFIER ON
> SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
> SET ARITHABORT ON
> SET NUMERIC_ROUNDABORT OFF
> SET CONCAT_NULL_YIELDS_NULL ON
> SET ANSI_NULLS ON
> SET ANSI_PADDING ON
> SET ANSI_WARNINGS ON
> COMMIT

There is no point in executing the SET statements in the loop, and
there is no point to make this a transaction. Not that it is wrong
either.

> BEGIN TRANSACTION
> ALTER TABLE @tName ADD
> ModifiedDT datetime NULL,
> ModifiedUserID int NULL
> GO

Here are two serious flaws: ALTER TABLE does not accept a variable.
You need to use dynamic SQL for this. (Or sp_MSforeachtable.)

And the GO there is completely out of place. GO is not an SQL command,
but an instruction to the query tool to separate the commands into
different batches. Thus, this batch will fail with a compilation
error, because the BEGIN after WHILE does not have an END.

> FETCH NEXT FROM C1

And if you thought what I said about FETCH above was silly, look here!
Here you don't fetch into a variable, but produce a result set.

--
Erland Sommarskog, SQL Server MVP, sommar@.algonet.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||Thanks John, I'll check it out

"John Bell" <jbellnewsposts@.hotmail.com> wrote in message
news:3f4c5f38$0$256$ed9e5944@.reading.news.pipex.ne t...
> Hi
> As this is not production code then you may want to check out the
> undocumented sp_MSforeachtable
>
http://groups.google.co.uk/groups?h...2%40tkmsftngp03
>
http://groups.google.co.uk/groups?h...man%40127.0.0.1
> John
> "Paul" <psampson@.uecomm.com.au> wrote in message
> news:1061944796.500758@.proxy.uecomm.net.au...
> > Hi, I'm trying to add a modified datetime and userid to all 72 tables in
> my
> > SQL 2000 database. I have the script to do one table, and a cursor, but
it
> > won't run across all tables. Any help would be appreciated. Thanks...
> > DECLARE @.tName varchar(40)
> > DECLARE C1 CURSOR FOR
> > select name from sysobjects where type = 'U'
> > OPEN C1
> > FETCH NEXT FROM C1 INTO @.tName
> > -- Check @.@.FETCH_STATUS to see if there are any more rows to fetch
> > WHILE @.@.FETCH_STATUS = 0
> > BEGIN
> > -- This is executed as long as the previous fetch succeeds
> > BEGIN TRANSACTION
> > SET QUOTED_IDENTIFIER ON
> > SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
> > SET ARITHABORT ON
> > SET NUMERIC_ROUNDABORT OFF
> > SET CONCAT_NULL_YIELDS_NULL ON
> > SET ANSI_NULLS ON
> > SET ANSI_PADDING ON
> > SET ANSI_WARNINGS ON
> > COMMIT
> > BEGIN TRANSACTION
> > ALTER TABLE @.tName ADD
> > ModifiedDT datetime NULL,
> > ModifiedUserID int NULL
> > GO
> > COMMIT
> > FETCH NEXT FROM C1
> > END
> > CLOSE C1
> > DEALLOCATE C1
> > GO
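
Putting Erland's corrections together, a sketch of a working version (dynamic SQL in place of the variable table name, a single FETCH, and no stray GO inside the loop):

DECLARE @tName sysname, @sql nvarchar(4000)
DECLARE C1 INSENSITIVE CURSOR FOR
    SELECT name FROM sysobjects WHERE type = 'U'
OPEN C1
WHILE 1 = 1
BEGIN
    FETCH C1 INTO @tName
    IF @@FETCH_STATUS <> 0
        BREAK
    -- ALTER TABLE does not accept a variable, so build the statement dynamically.
    SELECT @sql = N'ALTER TABLE ' + QUOTENAME(@tName) +
                  N' ADD ModifiedDT datetime NULL, ModifiedUserID int NULL'
    EXEC sp_executesql @sql
END
CLOSE C1
DEALLOCATE C1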

Adding New Column & updating bulk data in merge replication

Hi,
We have about 50 databases (SQL Server 2000, SP3) which are merge
replicated. We merge replicate about 100-odd tables in each of these
databases. We need to add a couple of columns to one of our major transaction
tables, where most inserts/updates are being done. This table presently has on
average 5 lacs records.
During testing, we noticed that it takes about 60-80 minutes to add a
column to this table. Considering the number of databases where the change
needs to be implemented, we will not be able to plan the upgrade without
production downtime. Upgrading 50 databases will take about 50 hours.
What are the options available in Replication so this can be done quickly,
without production downtime?
Adding to this, in one of the columns we have added to the transaction table,
we need to populate a new value. On testing we found that for 5 lacs records
it takes anywhere between 2-3 hours. This takes roughly another 75 hours for
us to do this update after adding the new column in the table. How can this
be speeded up?
thanks,
Soura
What's a lacs?
Basically there is probably no good solution for this. I would look at doing
a sync, then creating my publication, backing it up, restoring it to all the
subscribers, and doing a no-sync subscription; or I would look at
regenerating a snapshot and distributing it after you have made the column
change.
Hilary Cotter
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"SouRa" <SouRa@.discussions.microsoft.com> wrote in message
news:A19172BF-E8AC-4BB7-9FED-87EEC5353D89@.microsoft.com...
> Hi,
> We have about 50 databases (SQL Server 2000, SP3) which are merge
> replicated. We merge replicate about 100 odd tables in each of these
> database. We need to add couple of columns in one of our major transaction
> table where most insert/updates are being done. This table presently on
> average has 5 lacs records.
> During testing, we noticed that it takes about 60-80 minutes to add a
> column in this table. Considering the # of database we have where the
> change
> need to implemented, we will not be able to plan the upgrade without
> production downtime. For upgrade 50 database it will take about 50 hours.
> What are the options available in Replication so this can be done quickly
> w/o any production downtime.
> Adding to this, in one of the column we have added to the transaction
> table
> , we need to update a new value. On testing we found that for 5 lacs
> records
> it takes anyway between 2-3 hours. This takes roughly another 75 hours for
> us to do this update after adding the new column in the table. How can
> this
> be speeded up ?
> thanks,
> Soura
|||5 lacs is 500 K or 500 thousand, i.e. 500,000.
Lac is primarily an Indian unit of measurement; 1 lac is 0.1 million.
"Hilary Cotter" wrote:

> What's a lacs?
> Basically there is probably no good solution for this. I would look at doing
> a sync. Then creating my publication and then backing it up and restoring it
> to all the subscribers and doing a no sync subscription, or I would look at
> regenerating a snapshot and distributing it after you have made the column
> change.
> --
> Hilary Cotter
> Looking for a SQL Server replication book?
> http://www.nwsu.com/0974973602.html
> Looking for a FAQ on Indexing Services/SQL FTS
> http://www.indexserverfaq.com
> "SouRa" <SouRa@.discussions.microsoft.com> wrote in message
> news:A19172BF-E8AC-4BB7-9FED-87EEC5353D89@.microsoft.com...
>
>

Adding multiple columns

Hi,
I have two tables A and B with the following data:

Table A

ID (PK) | V | W | X | Final
---------------------------
1       | 4 | 5 | 6 | output

Table B

ID | Par_ID (FK) | Cost_Per_Annum
---------------------------------
1  | 1           | 12000
2  | 1           | 24000
3  | 1           | 14000

Output should be calculated using the formula below:

output = Column V * 1st record of Par_ID=1 + Column W * 2nd record of Par_ID=1 + Column X * 3rd record of Par_ID=1

output = 4 * 12000 + 5 * 24000 + 6 * 14000

How can I do this? I have tried several ways and failed. Please suggest.

hi Rajesh,

Try this out

Code Snippet

declare @TableA table(ID int, V int, W int, X int)
declare @TableB table(ID int, Par_ID int, Cost_Per_Annum int)
insert into @TableA select 1,4,5,6

insert into @TableB select 1,1,12000
insert into @TableB select 2,1,24000
insert into @TableB select 3,1,14000

select a.*,
a.V * (
 select top 1 v.Cost_Per_Annum from
 ( select top 1 Cost_Per_Annum, ID from @TableB where Par_ID = a.ID
 ) v order by ID desc ) +
a.W * (
 select top 1 v.Cost_Per_Annum from
 ( select top 2 Cost_Per_Annum, ID from @TableB where Par_ID = a.ID
 ) v order by ID desc ) +
a.X * (
 select top 1 v.Cost_Per_Annum from
 ( select top 3 Cost_Per_Annum, ID from @TableB where Par_ID = a.ID
 ) v order by ID desc ) Final
from @TableA a

|||

This should get you started.

Code Snippet

declare @t1 table (i int, v int, w int, x int)
insert @t1 select 1,4,5,6

declare @t2 table (iii int, i int, c int)
insert @t2 select 1,1,12000
union all select 2,1,24000
union all select 3,1,14000

select a.i, sum(vv*c) as [output]
from(
select *, case ii when 'v' then 1 when 'w' then 2 when 'x' then 3 end as iii
from (select i, v, w, x from @t1) as p
unpivot
(vv for ii in (v,w,x)) as unpvt
)a
join @t2 b on a.i=b.i and a.iii=b.iii
group by a.i

|||The output prints wrong values. Your query gives 202000;
the actual output is 252000.

I think the problem is in the 16th line. Can you please check?

thanks|||Please copy the query block as it is and check once more; I am getting the output 252000 with the same query.|||I am getting 202000. Please refer to the following link for a screen shot:

http://picavo.com//images/371729err.JPG

Raj
|||It is really strange, it is still working for me. Which version of SQL Server are you using?|||I am using SQL Server 2000.

|||I checked it on MSDE with SP3 and SQL Server Express Edition; it works on both.|||Can I report this problem to the Microsoft bug team?
|||

Try:

Code Snippet

create table dbo.t1 (
    ID int,
    V int,
    W int,
    X int
)
go

insert into dbo.t1 values(1, 4, 5, 6)
insert into dbo.t1 values(2, 7, 8, 9)
go

create table dbo.t2 (
    ID int,
    Par_ID int,
    Cost_Per_Annum int
)
go

insert into dbo.t2 values(1, 1, 12000)
insert into dbo.t2 values(2, 1, 24000)
insert into dbo.t2 values(3, 1, 14000)
insert into dbo.t2 values(4, 2, 10000)
insert into dbo.t2 values(5, 2, 20000)
insert into dbo.t2 values(6, 2, 30000)
go

select
    x.ID,
    (V * Cost_Per_Annum_V) + (W * Cost_Per_Annum_W) + (X * Cost_Per_Annum_X) as [output]
from
    dbo.t1 as x
    inner join
    (
    select
        c.Par_ID,
        max(case when d.rn = 1 then c.Cost_Per_Annum end) as Cost_Per_Annum_V,
        max(case when d.rn = 2 then c.Cost_Per_Annum end) as Cost_Per_Annum_W,
        max(case when d.rn = 3 then c.Cost_Per_Annum end) as Cost_Per_Annum_X
    from
        dbo.t2 as c
        inner join
        (
        select
            a.Par_ID,
            a.ID,
            count(*) as rn
        from
            dbo.t2 as a
            inner join
            dbo.t2 as b
            on a.Par_ID = b.Par_ID
            and a.ID >= b.ID
        group by
            a.Par_ID,
            a.ID
        ) as d
        on c.Par_ID = d.Par_ID
        and c.ID = d.ID
    group by
        c.Par_ID
    ) as y
    on x.ID = y.Par_ID
go

drop table dbo.t1, dbo.t2
go

AMB

Adding more tables than fit in the workspace

I have a report which needs 9 tables. I got the first three working but
when I went to insert the 4th - there was no "whitespace" for me to paste
into. Each of these sections will trigger a page break so maybe I just
overlap them on the screen? Is there a way to make the working space
bigger or some other solution? Thanks!|||Go to properties and select Body. Set the Size Height to what you desire.
"Don Parker" wrote:
> I have a report which needs 9 tables. I got the first three working but
> when I went to insert the 4th - there was no "whitespace" for me to paste
> into. Each of these sections will trigger a page break so maybe I just
> overlap them on the screen? Is there a way to make the working space
> bigger or some other solution? Thanks!
>
>
|||Report Designer's layout surface can be resized by
* Dragging the right or bottom edge of the layout surface.
* Entering a new value for the body height or width using the properties
window.
--
Bruce Johnson [MSFT]
Microsoft SQL Server Reporting Services
This posting is provided "AS IS" with no warranties, and confers no rights.
"B. Mark McKinney" <BMarkMcKinney@.discussions.microsoft.com> wrote in
message news:1CB6C09D-BF49-4B4F-AB7C-7CA57264CAA3@.microsoft.com...
> Go to properties and select Body. Set the Size Height to what you desire.
> "Don Parker" wrote:
>> I have a report which needs 9 tables. I got the first three working but
>> when I went to insert the 4th - there was no "whitespace" for me to paste
>> into. Each of these sections will trigger a page break so maybe I just
>> overlap them on the screen? Is there a way to make the working space
>> bigger or some other solution? Thanks!
>>
>>