
Thursday, March 29, 2012

Adjusting report width dynamically.. please help me

hi,

I am new to Reporting Services. I am using SQL Server Reporting Services to generate a report. I have designed a static RDL file based on a view which contains 40 columns; I included all 40 columns in the RDL file and used hidden-field expressions for hiding columns. Now at run time, if I select three columns for generating the report, the remaining hidden column spaces are still there in the report. The report width is not adjusting to the selected column widths. How do I eliminate the hidden column spaces in the report? Please help me; it is really urgent for me.

Thanks in advance...

You can change the width of the ReportViewer in the Load() event like so...

ReportViewer1.Width=500

I think if you try to do it in another event it has no effect because the size has already been rendered.
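For example (a minimal sketch, assuming a WinForms ReportViewer named ReportViewer1; the width is in pixels):

private void Form1_Load(object sender, EventArgs e)
{
    // set the size here, before the first render; later changes may be ignored
    ReportViewer1.Width = 500;
    ReportViewer1.RefreshReport();
}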

Good luck.

|||

try to set the width = "auto"

Sunday, March 25, 2012

additional data files not filling

We have a quad SQL Server that runs OLTP transactions at the rate of
hundreds per second (read & write).

We used to have all the tables in one file but started to notice high contention on this file. We added 3 more files to match the processor count. The problem is that the 3 additional files are not filling with data. Does anyone know why this happens or can recommend a fix?
--
will

|||Verify that the newly added files are in the appropriate filegroup (probably PRIMARY) and are of the appropriate file type (i.e., they were created as data files, not log files). Also, your new file(s) may not get data written to them if the old file still has space available.

Your logic for reducing contention by adding files to match the number of processors does not make sense. Contention is caused by physical IO going to the same physical disk and getting backed up because the pipe to the physical disk is too small. You would add files to a database filegroup to ease contention by creating the files on separate physical disks. It has no relation to the number of CPUs.
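For example, adding a file on a separate physical disk looks something like this (database, logical name, and path are hypothetical):

ALTER DATABASE SalesDB
ADD FILE
(
    NAME = SalesDB_Data2,
    FILENAME = 'E:\MSSQL\Data\SalesDB_Data2.ndf',
    SIZE = 1GB,
    FILEGROWTH = 256MB
)
TO FILEGROUP [PRIMARY]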

Regards,

hmscott

|||Yes, they are all the same file type (data files).

Microsoft claims that you should have a separate file per processor for high-contention OLTP SQL Servers. Parallelism is the goal here. SQL Server can only access one file per CPU at any one time. If it were a dual-processor machine it would have less impact, considering the OS & tempdb usually have something going on in parallel.

Considering we have a quad SQL Server, that means 4 files per filegroup.

As an FYI, we are running a RAID 10 with 14 disks.

|||Hmm, not doubting you, but could you send a link with the MS info? I had not seen that before.

As for your issue, what's the state of the original data file? Full, nearly full, or not even close? Also, can you verify that the new files you added were updateable (i.e., not read-only)? Beyond that, I would try testing some scenarios out on a development server somewhere.
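To check how full each file actually is, something like this works on SQL Server 2000 (run it in the database in question):

EXEC sp_helpfile
DBCC SHOWFILESTATS -- reports used vs. total extents for each data file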

Regards,

hmscott

Adding/Previewing custom table style

Hi,

I need to add and preview custom table styles. I was able to add a custom style by editing the StyleTemplates.xml file under "C:\Program Files\Microsoft Visual Studio 8\Common7\IDE\PrivateAssemblies\Business Intelligence Wizards\Reports\Styles" as per one of the postings here.

Now, how should I add the ability for the user to preview this custom style? I could probably write a form similar to MS "Choose table style". Is there a way to launch a particular page of the Report Wizard programmatically?

Thanks,

-Surendra

I have the same problem. Have you found the solution?


Thursday, March 22, 2012

adding to noise words file (noise.enu)... new words not taking effect?

For some reason I cannot get these new noise words to be
taken into account. I have stopped the Microsoft Search
service, edited the noise.enu file, started the service
again, and created a new index on my table.
Unfortunately, none of the noise words I have added have
taken effect.
Please advise if you have any suggestions.
Thanks,
Rick
noise.enu is for American English, noise.eng is for British English. Did you
do a full population? For words which were previously noise words to be
indexable and queryable you must do a full population.
Where did you edit your noise word list? C:\Program Files\Microsoft SQL
Server\MSSQL\FTdata\SQLServer\Config is the correct location for Win2k.
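For example, a full population can be started with something like this (the catalog name is hypothetical):

EXEC sp_fulltext_catalog 'MyCatalog', 'start_full'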
"Rick" <rmemmer@.carolina.rr.com> wrote in message
news:161c01c4b2dc$31e66af0$a601280a@.phx.gbl...
> For some reason I cannot get these new noise words to be
> taken into account. I have stopped the Microsoft Search
> service, edited the noise.enu file, started the service
> again, and created a new index on my table.
> Unfortunately, none of the noise words I have added have
> taken effect.
> Please advise if you have any suggestions.
>
> Thanks,
>
> Rick
>
|||Thanks for your reply.
Oh my... GREAT POINT! I was using the wrong noise.enu file. When I read up on this, the article stated it was under the System32 folder, and it is. The entire point of the article was editing that file, so I have no idea why they would suggest using that location. I'm guessing the multiple locations have something to do with being able to specify an individual/local list, as opposed to just one under the SQL Server folders.
Thanks again... much appreciated!

|||"Rick" <rmemmer@.carolina.rr.com> wrote in message
news:057101c4b2f4$8a09f810$a501280a@.phx.gbl...
> Thanks for your reply.
> Oh my... GREAT POINT! I was using the wrong noise.enu
> file. When I read up on this the article stated it was
> under the System32 folder, and it is. The entire point
> of the article was editing that file, so I have no idea
> why they would suggest using that location. I'm guessing
> the multiple locations has something to do with being
> able to specify an individual/local list, as opposed to
> just one under the SQL Server folders.
AFAIK the files in the System32 folder are for the non-SQL indexes for
Microsoft Search - so those will be for indexing files, etc.
Dan
|||Actually, they are for Index Server/Indexing Services, which is not MSSearch but is similar to it.
"Daniel Crichton" <msnews@.worldofspack.co.uk> wrote in message
news:OakbI2QtEHA.2532@.TK2MSFTNGP10.phx.gbl...
> "Rick" <rmemmer@.carolina.rr.com> wrote in message
> news:057101c4b2f4$8a09f810$a501280a@.phx.gbl...
> AFAIK the files in the System32 folder are for the non-SQL indexes for
> Microsoft Search - so those will be for indexing files, etc.
> Dan
>

Tuesday, March 20, 2012

Adding the Date to Filename of DTS Export

I'd like to add the date to the file name of a DTS Export.
For Example:
Export092503.xls
I've tried various methods but nothing has worked as of yet.
Any ideas?
Thanks in Advance.
TechRick

|||I export a standard file name, then rename using a BAT file. Here's the BAT file I use...

-------------------
@echo off

c:
cd \SQL_Support_Applications\FTP Scripts\SONIC_Data

REM %DATE% is assumed to expand like "Thu 09/25/2003":
REM ~10,4 = year, ~4,2 = month, ~7,2 = day

IF EXIST CUST_%DATE:~10,4%-%DATE:~4,2%-%DATE:~7,2%.dat DEL CUST_%DATE:~10,4%-%DATE:~4,2%-%DATE:~7,2%.dat

IF EXIST CUST_raw.dat REN CUST_raw.dat CUST_%DATE:~10,4%-%DATE:~4,2%-%DATE:~7,2%.dat

EXIT
-------------------

It checks to see if the dated file exists and deletes it before renaming.

|||Hi TechRick!

If I understand you correctly, you run a data export task in a DTS package.

In this case you can use an ActiveX Script Task in the package to get the system date and modify the properties of the export task. A similar example is shown here: http://www.sqldts.com/default.aspx?231
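A rough VBScript sketch of the idea (the connection name "Excel Destination" and the output path are assumptions, not from that article):

Function Main()
    Dim oPkg, sFile
    ' the ActiveX Script Task can reach the package through DTSGlobalVariables
    Set oPkg = DTSGlobalVariables.Parent
    ' build ExportMMDDYY.xls from today's date
    sFile = "C:\Exports\Export" & Right("0" & Month(Date), 2) & _
            Right("0" & Day(Date), 2) & Right(Year(Date), 2) & ".xls"
    oPkg.Connections("Excel Destination").DataSource = sFile
    Main = DTSTaskExecResult_Success
End Function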

Hope this helps you!

Greetings,
Carstensql

Adding the date to a subscription report name

Does anyone have information on how I can add a file date to my named report when a subscription writes that file to a network folder? I am exporting as .pdf files. For example, today I have filename abc.pdf but I'd like it to be abc03312006.pdf. Likewise, tomorrow's file would be abc04012006.pdf.

Any help is greatly appreciated.

You have to go for Reporting Services programming: create a subscription programmatically and pass the date as a value to the subscription's description parameter, like this.

service.CreateSubscription(reportpath, extSettings, description, eventType, matchData, parameters );
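For a file share subscription, the dated name goes into the FILENAME extension setting. A sketch against the Reporting Services web service proxy (the output path is an assumption):

string fileName = "abc" + DateTime.Now.ToString("MMddyyyy");

ParameterValue[] extensionParams = new ParameterValue[3];
extensionParams[0] = new ParameterValue(); extensionParams[0].Name = "FILENAME"; extensionParams[0].Value = fileName;
extensionParams[1] = new ParameterValue(); extensionParams[1].Name = "PATH"; extensionParams[1].Value = @"\\server\reports";
extensionParams[2] = new ParameterValue(); extensionParams[2].Name = "RENDER_FORMAT"; extensionParams[2].Value = "PDF";

ExtensionSettings extSettings = new ExtensionSettings();
extSettings.Extension = "Report Server FileShare";
extSettings.ParameterValues = extensionParams;

service.CreateSubscription(reportpath, extSettings, description, eventType, matchData, parameters);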

|||

Can you please give more details on it? Where do I go and how do I do this?

Thanks,

-Rohit


Adding table to report file

I am very new to SQL Report Server, but I have used other reporting tools before where I could add (or import) a new table of information directly into a report itself. I would do this when I required information in the report that is not part of the data source's tables.

For example, I use it for tying complex sales budget numbers that are not included anywhere in the main data source into the report. I would simply add the table to the query I created, create my links, and I have a sales vs. budget report.

Do you know if this is possible using the SQL reporting tools, or does the table need to be part of the main data source?

Thank you in advance.....

Chris B.

hi,

You can create a new dataset with a different data source, then embed the table into your report as a subreport.

regards,

Janos

|||

Thanks Janos,

I have not tried it yet, so I will have to play with it a bit to see if that will work for me.

So I assume I can link the sub report based on sales rep, month, category, and customer....

Thanks,

Chris B

Monday, March 19, 2012

Adding SQL Server Database file to the website

Hi!!!

I have a database called "test_DB" with one table and some data in it. I uploaded the website and created a SQL Server account to upload the database. Can you please help me with that task? Are there step-by-step directions for it, and what files do I need to add (like ".mdf" or others...)? I appreciate your help!!!

Thanks...

You are using an ISP for the hosting? They should have some way of setting that up. You probably should send them an email for uploading a sql server database. If you own the server, do a database detach/attach, which is simple.
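If you do own the server, the detach/attach route is roughly this (paths are hypothetical; the .mdf/.ldf files get copied to the target machine between the two steps):

EXEC sp_detach_db 'test_DB'
-- copy test_DB.mdf and test_DB_log.ldf to the new server, then run there:
EXEC sp_attach_db 'test_DB',
    'C:\Data\test_DB.mdf',
    'C:\Data\test_DB_log.ldf'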

adding secondary database file

Question: When you add secondary files to an already existing database, does SQL automatically stripe existing data across the new files, or only when new data is created?

I believe only if you add new data, or do things like rebuild a clustered index (assuming both files, and the table, are in the same filegroup).
--
Aaron Bertrand
SQL Server MVP
http://www.aspfaq.com/
"John Gossett" <jgossett@.nwedge.com> wrote in message
news:008e01c3c666$b3fbe6e0$a601280a@.phx.gbl...
> Question: When you add secondary files to an already
> existing database does SQL automatically stripe existing
> data across the new files or only when new data is created?|||SQL uses a process called proportional fill. It tries to keep each of the
data file the same percentage full. It does this by writing all new data to
the new file until it is as full as the existing data file..
It does NOT re-balance the existing data. You must do this by drop/re-create
the clustered index or export/import the data..
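For example, rebuilding the clustered index in place redistributes the existing rows across all files in the filegroup (table, column, and index names are hypothetical):

CREATE UNIQUE CLUSTERED INDEX PK_Orders ON dbo.Orders (OrderID)
WITH DROP_EXISTING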
--
Wayne Snyder, MCDBA, SQL Server MVP
Computer Education Services Corporation (CESC), Charlotte, NC
www.computeredservices.com
(Please respond only to the newsgroups.)
I support the Professional Association of SQL Server (PASS) and its
community of SQL Server professionals.
www.sqlpass.org
"John Gossett" <jgossett@.nwedge.com> wrote in message
news:008e01c3c666$b3fbe6e0$a601280a@.phx.gbl...
> Question: When you add secondary files to an already
> existing database does SQL automatically stripe existing
> data across the new files or only when new data is created?

Adding SDF data into a Listview - C#

Hello everyone, I finally got the SDF file onto my Pocket PC, and now am having trouble loading the data into the ListView. I am not sure where I am going wrong, but the code is listed below. Any ideas? Thanks so much, everyone.

private void LoadList(string szSQL)
{
    // Clear out anything that might already be in the list
    CollectionList.Items.Clear();

    // save off the SQL Statement
    m_szCurrFilter = szSQL;

    // Throw Data into the list
    SqlCeCommand sqlLoadEvents = m_cConn.CreateCommand();
    sqlLoadEvents.CommandText = szSQL;
    SqlCeDataReader rReader = sqlLoadEvents.ExecuteReader();

    // roll through the reader and put items into the list
    while (rReader.Read())
    {
        ListViewItem lvItem = new ListViewItem(rReader.GetValue(rReader.GetOrdinal("ca_id")).ToString()); // Nmbr

        // Note: Since a "tag" is not supported by the compact framework, we'll use the "ImageIndex"
        // property to hold the ID, this way we can find it later when the user clicks on an item
        lvItem.ImageIndex = (int)rReader.GetValue(rReader.GetOrdinal("ca_id"));

        lvItem.SubItems.Add(rReader.IsDBNull(rReader.GetOrdinal("ca_titel")) ? "" : rReader.GetValue(rReader.GetOrdinal("ca_titel")).ToString()); // Title
        lvItem.SubItems.Add(rReader.IsDBNull(rReader.GetOrdinal("ca_alter")) ? "" : rReader.GetValue(rReader.GetOrdinal("ca_alter")).ToString()); // Rating
        lvItem.SubItems.Add(rReader.IsDBNull(rReader.GetOrdinal("ca_sex")) ? "" : rReader.GetValue(rReader.GetOrdinal("ca_sex")).ToString()); // Gender

        CollectionList.Items.Add(lvItem);
    }
}

I think I have narrowed it (somewhat) down:

The following item is giving me problems, I think.

LoadList("SELECT * FROM fragenkatalog");

-

fragenkatalog is the table in the database.

Thanks again.

|||

Those calls to GetOrdinal("columnName") combined with a SELECT * command are going to make your code as slow as possible.

The goal you have of storing the ID in the ImageIndex is not going to work, but what you are trying to do is what I call "poor man's databinding" and I understand why you are doing it. In controls like datagrids and listviews, if you want ultimate performance (meaning, avoiding binding), you can create an extra column in your ListView, set its width to zero, and in this column store the ID you need on the row.

Here is a sample of filling a listview with the results of a complex database query. Note that we explicitly list the columns needed in the SQL, we refer to the ordinals by offset, close/dispose the data reader, etc. In this example, there are 2 columns with a width of zero that store GUIDs that the application needs for each row selected - poor man's databinding.

private void _populateTodaysTasks()
{
string sql = "SELECT a.ScheduledStartDate, b.Name, a.Name, c.Name, a.ID, c.ID, a.ActualStartDate, a.ActualEndDate, a.Synchronized " +
"FROM Job a, Customer b, Form c, Site d " +
"WHERE a.TechnicianID = '" + Globals.GetInstance().TechnicianID + "' " +
"AND a.SiteID = d.ID AND d.CustomerID = b.ID " +
"AND a.FormID = c.ID " +
"AND a.Synchronized = 0 " +
"ORDER BY a.ScheduledStartDate";

SqlCeDataReader dtr = DatabaseManager.GetInstance().ExecuteQuery(sql);

if ( null != dtr )
{
while ( dtr.Read() )
{
ListViewItem lvi = new ListViewItem(dtr.GetString(0));
lvi.SubItems.Add(dtr.GetString(1).Trim());
lvi.SubItems.Add(dtr.GetString(2).Trim());
lvi.SubItems.Add(dtr.GetString(3).Trim());
lvi.SubItems.Add(dtr.GetGuid(4).ToString()); // jobID
lvi.SubItems.Add(dtr.GetGuid(5).ToString()); // formID
string start = dtr.GetString(6).Trim();
string end = dtr.GetString(7).Trim();
bool sent = dtr.GetBoolean(8);

// removed some code that uses start and end to determine if a task is past due

lvwTasks.Items.Add(lvi);

}
dtr.Close();

dtr.Dispose();

}

}

Darren

|||

Thank you Darren, I appreciate your help.

I will try this out later on and let you know how I did. :)

I really appreciate it!

Martina

|||

Well, I'm confused.

I have re-thought everything again, gone through the code, and have simplified it somewhat (I think). But I am still not getting the list of items in the listview box. In my sdf file, I have 1 Table (fragenkatalog) and within it, I have 5 columns (id (int), title (nvarchar (150)), picture (nvarchar (100)), age (int), and gender (nvarchar (1)) ) -- but I am only listing the first 3 in the listbox.

The way the code is now, I am not getting any error messages when I debug.

I am including the code for the application.... I'm at such a loss, and I've been using the ".NET Compact Framework KickStart" book (Rubin, Yates) and have not been having much luck with finding my answer there.

If someone could give me a helping hand, I would be very appreciative.

Thanks again,

Martina

using System;
using System.Drawing;
using System.Collections;
using System.ComponentModel;
using System.Windows.Forms;
using System.IO;
using System.Text;
using System.Data;
using System.Data.SqlServerCe;
using System.Data.Common;

namespace PedBoneView
{
    /// <summary>
    /// This is the default form.
    /// </summary>
    public class Form1 : System.Windows.Forms.Form
    {
        private System.Windows.Forms.ColumnHeader columnHeader1;
        private System.Windows.Forms.ColumnHeader columnHeader2;
        private System.Windows.Forms.ColumnHeader columnHeader3;
        private System.Windows.Forms.MenuItem New;
        private System.Windows.Forms.ListView lVListe;
        private System.Windows.Forms.ColumnHeader columnHeader4;
        private System.Windows.Forms.ColumnHeader columnHeader5;
        private System.Windows.Forms.ColumnHeader columnHeader6;
        private System.Windows.Forms.MainMenu mainMenu1;
        SqlCeConnection conn = null;
        private System.Windows.Forms.ContextMenu contextMenu1;
        private System.Windows.Forms.MenuItem menuItem2;
        private System.Windows.Forms.MenuItem menuItem3;
        SqlCeCommand cmd = null;

        public Form1()
        {
            //
            // Required for Windows Form Designer support
            //
            InitializeComponent();
            //
            // TODO: Add any constructor code after InitializeComponent call
            //
        }

        /// <summary>
        /// Clean up any resources being used.
        /// </summary>
        protected override void Dispose(bool disposing)
        {
            base.Dispose(disposing);
        }

        static void Main()
        {
            Application.Run(new Form1());
        }

        private void Form1_Load(object sender, System.EventArgs e)
        {
            try
            {
                if (!File.Exists("//My Documents//pedbone.sdf")) // If the database doesn't exist, create it
                {
                    SqlCeEngine engine = new SqlCeEngine("Data Source = //My Documents//pedbone.sdf");
                    engine.CreateDatabase();
                    conn = new SqlCeConnection("Data Source = //My Documents//pedbone.sdf");
                    conn.Open();
                    cmd = conn.CreateCommand();
                    cmd.CommandText = "CREATE TABLE fragenkatalog(ca_id int PRIMARY KEY IDENTITY(1,1), ca_titel nvarchar(150), ca_bildurl1 nvarchar(100), ca_alter int, ca_sex nvarchar(1))";
                    cmd.ExecuteNonQuery();
                    conn.Close();
                }
                else // else initialize the connection
                {
                    conn = new SqlCeConnection("Data Source = //My Documents//pedbone.sdf");
                    cmd = conn.CreateCommand();
                    UpdateLView();
                }
            }
            catch (SqlCeException ex)
            {
            }
        }

        // New entry
        private void New_Click(object sender, System.EventArgs e)
        {
            New newpbdata = new New(this);
            newpbdata.Show();
        }

        public void UpdateLView() // This function updates the main listview
        {
            // lVListe.Items.Clear();
            conn.Open();
            cmd.CommandText = "SELECT ca_id int, ca_titel nvarchar(150), ca_bildurl1 nvarchar(100) FROM fragenkatalog";
            SqlCeDataReader rdr = cmd.ExecuteReader();

            if (null != rdr)
                while (rdr.Read())
                {
                    ListViewItem lvi = new ListViewItem(rdr.GetString(0));
                    lvi.SubItems.Add(rdr.GetString(1));
                    lvi.SubItems.Add(rdr.GetString(2));
                    lvi.SubItems.Add(rdr.GetString(3));
                    lVListe.Items.Add(lvi);
                }
            rdr.Close();
            rdr.Dispose();
        }
    }
}

|||

While you do create a database and a table, you have not inserted any actual data into the table before you attempt to read from the table and populate the listview.

Try inserting some data into the database table.

-ds

|||

I already have data in the database table (about 750 rows).

Is that not possible to do?

Technically, I just want to show the data that's in the table, not edit, add, or remove any of the data.

Thanks again.... :)

|||

How is the ListView itself configured? Did you use the VS2005 designer to add columns and set the view to Details with/without checkboxes or are you doing this in code?

Be sure you have created ListViewColumns that match what you are trying to display in the ListView and that you have set the Default View to Details.

Post the portion of the InitializeComponent method that initializes the listview if you want me to review that.

This should not be so difficult - there is something small and easy you have wrong and we just need to find it.

Darren

|||

Darren,

Thank you so much for your help. I'm really feeling like an idiot at the moment, simply because I cannot find the problem myself. I was getting worried that it might be my sdf file....

I'm using code to do this. I am using MS Development Environment 2003...

I hope I've provided the code you need.

Thank you very much again...

Martina

--

private void InitializeComponent()
{
    this.mainMenu1 = new System.Windows.Forms.MainMenu();
    this.New = new System.Windows.Forms.MenuItem();
    this.lVListe = new System.Windows.Forms.ListView();
    this.columnHeader1 = new System.Windows.Forms.ColumnHeader();
    this.columnHeader2 = new System.Windows.Forms.ColumnHeader();
    this.columnHeader3 = new System.Windows.Forms.ColumnHeader();
    this.columnHeader4 = new System.Windows.Forms.ColumnHeader();
    this.columnHeader5 = new System.Windows.Forms.ColumnHeader();
    //
    // lVListe
    //
    this.lVListe.Columns.Add(this.columnHeader1);
    this.lVListe.Columns.Add(this.columnHeader2);
    this.lVListe.Columns.Add(this.columnHeader3);
    this.lVListe.Columns.Add(this.columnHeader4);
    this.lVListe.Columns.Add(this.columnHeader5);
    this.lVListe.ContextMenu = this.contextMenu1;
    this.lVListe.FullRowSelect = true;
    this.lVListe.Size = new System.Drawing.Size(240, 216);
    this.lVListe.View = System.Windows.Forms.View.Details;
    //
    // columnHeader1
    //
    this.columnHeader1.Text = "ID";
    this.columnHeader1.Width = 30;
    //
    // columnHeader2
    //
    this.columnHeader2.Text = "Title";
    this.columnHeader2.Width = 80;
    //
    // columnHeader3
    //
    this.columnHeader3.Text = "Image";
    this.columnHeader3.Width = 60;
    //
    // columnHeader4
    //
    this.columnHeader4.Text = "Age";
    this.columnHeader4.Width = 35;
    //
    // columnHeader5
    //
    this.columnHeader5.Text = "Sex";
    this.columnHeader5.Width = 35;
    //
    // Form1
    //
    this.Controls.Add(this.lVListe);
    this.Menu = this.mainMenu1;
    this.Text = "PedBone Pocket";
    this.Load += new System.EventHandler(this.Form1_Load);
}

|||

I think I see the problem.

The ListView object you define has 5 columns:

ID Title Image Age Sex

and the ListView items you are trying to add to the ListView only have 3 columns resulting from this select statement

SELECT ca_id int, ca_titel nvarchar(150), ca_bildurl1 nvarchar(100) FROM fragenkatalog

The number of SubItems in a ListViewItem must be one less than the number of columns defined in the ListView itself. So, in other words, you create a new ListViewItem and give it the value 'ID'. If there are 5 total columns defined in the ListView, you need to add 4 more subitems to this ListViewItem so that 'ID' plus the other 4 subitems constitute a total of 5 items, which match the 5 columns.

Darren

|||

Hmmmmmmm.....

That didn't seem to work... Or have I missed something again? I have triple-checked upper and lower case, so that's not it. I checked the nvarchar values, and those seem to be okay too. I tried uppercase ID and lowercase id, and ca_id too.

I'm still not getting any errors, but also nothing in the ListView.

Here's the updated code....

Thank you again, Darren.

public void UpdateLView() // This function updates the main listview
{
    lVListe.Items.Clear();
    conn.Open();
    cmd.CommandText = "SELECT ca_id int, ca_titel nvarchar(150), ca_bildurl1 nvarchar(100), ca_alter int, ca_sex nvarchar(1) FROM fragenkatalog";
    SqlCeDataReader rdr = cmd.ExecuteReader();

    if (null != rdr)
        while (rdr.Read())
        {
            ListViewItem id = new ListViewItem(rdr.GetString(0)); // this should be the id #
            id.SubItems.Add(rdr.GetString(1)); // title
            id.SubItems.Add(rdr.GetString(2)); // picture
            id.SubItems.Add(rdr.GetString(3)); // age
            id.SubItems.Add(rdr.GetString(4)); // gender
            lVListe.Items.Add(id);
        }
    rdr.Close();
    rdr.Dispose();
}

|||

Just email me your SDF file and I will send you the working code - newsgroups are awful for this sort of thing and it would just be faster for me to fix it and send it to you.

Do you want all 5 columns in the listview (or did you only want the 3 you originally had in the sql select statement?)

Google me and contact me through my BLOG.

Darren Shaffer

|||

I did receive your SDF file and I have sent you back a working solution with a note explaining the necessary changes. Check your email.

Darren

Sunday, March 11, 2012

Adding Row Numbers to Flat File Source

Hi,

I was wondering if it is possible to add an identity column to a flat file data source as it is being processed in a data flow. I need to know the record number of each row in the file. Can this be done with the Derived Column task, or is it possible to return the value of a row count on each row of the data?

Any help on this is gratefully received.

Cheers,

Grant

Generating Surrogate Keys
(http://www.sqlis.com/default.aspx?37)

-Jamie

|||

You may have more than one option here; I would try a script component in the data flow that uses a variable to seed a generated number and adds it as a column to the pipeline.

Create an SSIS Int32 variable, e.g. MyIdentityColSeed, that is set to zero.

Then, in your data flow, after the flat file source, create a script component with one output column (named, for example, MyIdentityCol) and use a script like this:

' Counter is a class-level field of the script component
Private Counter As Integer = 0

Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer)
    Row.MyIdentityCol = Variables.MyIdentityColSeed + Counter
    Counter += 1
End Sub

Even though the code is very simple, I did not test it, so take the time to debug it.

|||Rafael, Jamie,

Thanks for both your posts on this matter. I'm looking into Rafael's suggestion at present and see that the only way to access variables in the data flow section is within the PostExecute phase. The problem with this is that it doesn't seem to update the variable as I move through the rows in the data flow. Is this because of how SSIS has been developed? Can you not set a variable anywhere else within the script? I'll look at Jamie's suggestion once this one has been exhausted.

Thanks,

Grant

|||

The Pre and Post execute limitation is just that. It also makes more sense as you only need to read and write the variables pre/post, which is faster than accessing them for every row. Use local variables in the actual main execution code.

You may want to take a quick look at the Row Number Transformation, it may do exactly what you want with seed and increment settings already- http://www.sqlis.com/default.aspx?93
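A sketch of that pre-read pattern, reusing the assumed names from Rafael's snippet (the SSIS variable is read once in PreExecute, and a plain field does the per-row work):

Private Counter As Integer

Public Overrides Sub PreExecute()
    MyBase.PreExecute()
    ' read the seed variable once, before rows start flowing
    Counter = Me.Variables.MyIdentityColSeed
End Sub

Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer)
    Row.MyIdentityCol = Counter
    Counter += 1
End Sub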

|||Got Rafael's suggestion to work. Didn't need to use a variable in the end, just a counter that was initialized in the PreExecute event and incremented for each row. Many thanks for everyone's help.

Yet more new things learned today :) .

Thank you all.

Grant

|||

Grant Swan wrote:

Rafael, Jamie,

Thanks for both your posts on this matter. I'm looking into Rafael's suggestion at present and see that the only way to access variables in the data flow section is within the PostExecute phase. The problem with this is that it doesn't seem to update the variable as I move through the rows in the data flow. Is this because of how SSIS has been developed? Can you not set a variable anywhere else within the script? I'll look at Jamie's suggestion once this one has been exhausted.

Thanks,

Grant

Errr...both Rafael's and my suggestions were exactly the same

Glad you got it working anyhow!

-Jamie

|||

Grant,

I think Jamie is right; the approaches are identical. I do not keep records of web resources I use, so I was unable, like Jamie, to post a link to the original source. I guess I owe some credit to the author of the script.

Rafael Salas

|||Jamie,

Sorry, I'd had a previous link open from the same website and was looking at this, which had some resemblance to what I wanted to do:

http://www.sqlis.com/default.aspx?93

As opposed to the link I'd clicked from your post. Apologies, and I'll mark your post as an answer also.

It's been a bad day and I've got so many SSIS things in my head :)

Cheers,

Grant

|||

Grant Swan wrote:

Jamie,

Sorry, I'd had a previous link open from the same website and was looking at this, which had some resemblance to what I wanted to do:

http://www.sqlis.com/default.aspx?93

As opposed to the link I'd clicked from your post. Apologies, and I'll mark your post as an answer also.

It's been a bad day and I've got so many SSIS things in my head :)

Cheers,

Grant

Ha ha. Cool. I'm not bothered about getting my "answer count" up, as long as the thread has at least one reply marked as the answer. thanks anyway tho!

-Jamie

adding precedence to multiple files

hi all,

I have a package here which updates a DB from a flat file source. Now the problem is I may get multiple files. I have used a ForEach loop to handle this; it takes files based on the file names (the file names have a timestamp in them), but I want to process the files in order of their creation time.

Please help me on this. I have written a script task before the ForEach loop and I have got the minimum creation date from all the files; I am not able to get any further from here.

Does anybody have an idea!!

AFAIK, the files are shown in creation order in the ForEach loop. However, I am not sure this is gospel.

You could use a script task to load a list of files and sort the list by the file attributes. Using this sorted list, you could then loop over it with the ForEach loop container.

If you wanted to get clever, modify the For Each Directory example in the MS SQL 2005 samples. Create your own file enumerator, ensuring the files are in the order you want.

|||Generally, it does sort by filename - but there are no guarantees of this. If you want to ensure the sort order is correct, follow Crispin's advice to modify the For Each Directory sample, or use the sample posted here (http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=1439218&SiteID=1) by jaegd. Or read your directory, save each filename to a temp table, and use SQL sorting.
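A rough VB.NET sketch for such a Script Task (the folder, file mask, and variable name are assumptions; User::FileList must be listed in the task's ReadWriteVariables, and both procedures live in the task's ScriptMain class):

Imports System.IO

Public Sub Main()
    Dim dir As New DirectoryInfo("C:\Import")
    Dim files As FileInfo() = dir.GetFiles("*.txt")

    ' oldest file first
    Array.Sort(files, AddressOf CompareByCreationTime)

    Dim names(files.Length - 1) As String
    For i As Integer = 0 To files.Length - 1
        names(i) = files(i).FullName
    Next

    ' hand the ordered list to a Foreach From Variable enumerator
    Dts.Variables("User::FileList").Value = names
    Dts.TaskResult = Dts.Results.Success
End Sub

Private Function CompareByCreationTime(ByVal x As FileInfo, ByVal y As FileInfo) As Integer
    Return DateTime.Compare(x.CreationTime, y.CreationTime)
End Function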

Thursday, March 8, 2012

Adding Parameters from .ASPX file

HI,
I want to show results in a report based on a variable sent from an .aspx file. Is it possible to do this in the Report Designer? I have checked the possibilities for adding parameters, but as far as I know the only ways to add parameters to queries in the Report Designer are a prompt, a text box, or a dropdown. Can I also add code like <% Request.Form(variable) %>?
This would really help me a lot, so I hope somebody knows the answer. Mucho gracias.
capsync

Hi,

It is quite possible to do. I've posted this code in one of my previous posts. This should help you quite a bit; place this code in an aspx file's button-click code-behind event.


I created an 'objReport' object, which was an SRS report object. You should focus on the Response.Write part.
[ReportServer] = SRS report server URL
[Path] = Report location, e.g. '/folder/myReport'
[ReportParameters] = Well, anything
string strFullParameters = string.Format(objReport.Parameters, strParams);
strFullParameters = strFullParameters.Replace("/", "%2f");
strFullParameters = strFullParameters.Replace(":", "%3a");
strFullParameters = strFullParameters.Replace(" ", "+");

Response.Write("<DIV><IFRAME ID=SQLReportsFrame SRC='" + this.ReportServer + "?" + objReport.Path.Replace("/", "%2f") + strFullParameters + "&rc:ReplacementRoot=" + this.Request.Url.AbsoluteUri + "?ServerUrl=' HEIGHT=100% WIDTH=100%></IFRAME></DIV>");

|||Thanks for your answer. Unfortunately, I still didn't manage to fix it. Can you give me some more explanation please? Thanks in advance.
cheers,
capsync

|||

All the parameters need to exist on the SRS report first (create all possible parameters), but allow blanks. Unless you would like to add parameters dynamically, which is a whole different ball game. Then you could use only the ones required by sending them from the aspx page.

adding odbc to linked server

I have an ODBC driver to a customized text file system (on AIX). I am using
WinSQL to query from that system. Does SQL Server allow me to register it as a
linked server? I am guessing I should be able to, as I have an ODBC
driver to the system. Please tell me if it is possible.
When I try to do it, I get an error message like the one below:
Error 7399: OLE DB provider 'MSDASQL' reported an error.
Client unable to establish connection error:1408F0C6:SSL3_GET_RECORD: packet
length too long]
OLE DB error trace [OLE/DB Provider 'MSDASQL' IDBInitialize::Initialize
returned 0x80004005: ],
Thanks,
Subbu.
Hi
ODBC data sources use the MSDASQL provider and require a System DSN. This
error can occur if it is a User DSN. See sp_addlinkedserver in Books Online
for an example. If you still have problems you may want to turn ODBC tracing
on to debug it.
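For example (the linked server and DSN names are hypothetical; the DSN must be a System DSN on the SQL Server machine):

EXEC sp_addlinkedserver
    @server = N'IMPACT',
    @srvproduct = N'',
    @provider = N'MSDASQL',
    @datasrc = N'MyImpactDSN'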
John
"Subbaiahd" <subbaiahd@.hotmail.com> wrote in message
news:%23XK7zvYzEHA.576@.TK2MSFTNGP14.phx.gbl...
>I have a odbc driver to a customized text file system (on aix), I am using
> winsql to query from that sysem, does sql server allow to register it as a
> linked server. I am guessing i should be able to register as i have odbc
> driver to the system. Please suggest me if it is possible
> When i try to do it, i am getting message like i mentioned here
> Error 7399: OLE DB provider 'MSDASQL' reported an error.
> Client unable to establish connection error:1408F0C6:SSL3_GET_RECORD:
> packet
> length too long]
> OLE DB error trace [OLE/DB Provider 'MSDASQL' IDBInitialize::Initialize
> returned 0x80004005: ],
>
> Thanks,
> Subbu.
>
|||There was a problem with the ODBC driver; after I corrected it I am able to
register the linked server without any errors, and I can see the list of
tables in Enterprise Manager, but I am not able to run a query. See below
for the error message I am getting for this query:
query:
select CLI_ID, CLIENT_NAME
from IMPACT...CLI WHERE CLI_ID = '00001083'
error message:
Server: Msg 306, Level 16, State 1, Line 1
The text, ntext, and image data types cannot be compared or sorted,
except when using IS NULL or LIKE operator.
Can you please give me more details on how to turn ODBC tracing on to debug?
Thanks,
Subbu.
"John Bell" <jbellnewsposts@.hotmail.com> wrote in message
news:enu1IB0zEHA.1452@.TK2MSFTNGP11.phx.gbl...
> Hi
> ODBC datasource use the MSDASQL provider and require a System DSN. This
> error can occur if it is a User DSN. See sp_addlinkedserver in books
online
> for an example. If you still have problems you may want to turn ODBC
tracing[vbcol=seagreen]
> on to debug it.
> John
> "Subbaiahd" <subbaiahd@.hotmail.com> wrote in message
> news:%23XK7zvYzEHA.576@.TK2MSFTNGP14.phx.gbl...
using[vbcol=seagreen]
a
>
|||Hi
In the ODBC Data Source Manager applet (either in Control Panel or on
the Administrative Tools menu), you will see a trace tab with a button to
start tracing. At a guess, CLI_ID is being interpreted as text, ntext
or image, and you need to specify CLI_ID LIKE '00001083'.
John
"Subbaiahd" <subbaiahd@.hotmail.com> wrote in message news:<u8FDdVX0EHA.2624@.TK2MSFTNGP11.phx.gbl>...[vbcol=seagreen]
> There is a problem with odbc driver after i corrected it i am able to
> register as linked server without any errors, and i can see the list of
> tables in enterprise manager, but I am not able to run a query , see below
> for error message i am getting for a query
> query:
> select CLI_ID, CLIENT_NAME
> from IMPACT...CLI WHERE CLI_ID = '00001083'
> error message:
> Server: Msg 306, Level 16, State 1, Line 1
> The text, ntext, and image data types cannot be compared or sorted,
> except when using IS NULL or LIKE operator.
>
> Can you please give me more details how to to turn ODBC tracing on to debug.
> Thanks,
> Subbu.
>
> "John Bell" <jbellnewsposts@.hotmail.com> wrote in message
> news:enu1IB0zEHA.1452@.TK2MSFTNGP11.phx.gbl...
> online
> tracing
> using
> a
|||I started ODBC tracing, but could not understand the log. If you want to
see it I can paste its contents, but the log file size is 2MB. I don't have a
clue how to proceed further; can you please help me?
Query:
select CLI_ID, CLIENT_NAME
from IMPACT...CLI WHERE CLI_ID like '00001083'
Error message:
Server: Msg 7356, Level 16, State 1, Line 1
OLE DB provider 'MSDASQL' supplied inconsistent metadata for a column.
Metadata information was changed at execution time.
OLE DB error trace [Non-interface error: Column 'CLI_ID' (compile-time
ordinal 1) of object 'CLI' was reported to have a DBCOLUMNFLAGS_ISLONG of
128 at compile time and 0 at run time].
"John Bell" <jbellnewsposts@.hotmail.com> wrote in message
news:3b81e6a.0411231254.6732697d@.posting.google.co m...
> Hi
>
> In the ODBC Data Source Manager applet (either in control panel or on
> the Administrators menu), you will see a trace tab, with a button to
> start tracing. At a guess CLI_ID is being interpreted as text, ntext
> or image and that you need to specify CLI_ID LIKE '00001083'.
> John
>
> "Subbaiahd" <subbaiahd@.hotmail.com> wrote in message
news:<u8FDdVX0EHA.2624@.TK2MSFTNGP11.phx.gbl>...[vbcol=seagreen]
below[vbcol=seagreen]
debug.[vbcol=seagreen]
This[vbcol=seagreen]
it as[vbcol=seagreen]
odbc[vbcol=seagreen]
error:1408F0C6:SSL3_GET_RECORD:[vbcol=seagreen]
IDBInitialize::Initialize[vbcol=seagreen]
|||Hi
Searching google for "OLE DB provider 'MSDASQL' supplied inconsistent
metadata for a column" turns up quite a few posts, so you can gain
solace in that you are not alone! If you ran the query without the
where clause does it still cause a problem? Suggestions from other
posts include using OPENQUERY or using the Oracle OLEDB driver
instead. This post points you to a KB article on the error and how you
may gain more information:
http://tinyurl.com/55x93
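An OPENQUERY version of the failing query would look something like this (the whole statement is passed through to the ODBC driver untouched):

SELECT *
FROM OPENQUERY(IMPACT, 'SELECT CLI_ID, CLIENT_NAME FROM CLI WHERE CLI_ID = ''00001083''')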
HTH
John
"Subbaiahd" <subbaiahd@.hotmail.com> wrote in message news:<#wvDYOk0EHA.3500@.TK2MSFTNGP09.phx.gbl>...[vbcol=seagreen]
> I started odbc tracing , but could not understand the log. If you want to
> see i can paste its contents but the log file size is 2MB. I dont have a
> clue to proceed further, can you please help me.
> Query:
> select CLI_ID, CLIENT_NAME
> from IMPACT...CLI WHERE CLI_ID like '00001083'
> Error message:
> Server: Msg 7356, Level 16, State 1, Line 1
> OLE DB provider 'MSDASQL' supplied inconsistent metadata for a column.
> Metadata information was changed at execution time.
> OLE DB error trace [Non-interface error: Column 'CLI_ID' (compile-time
> ordinal 1) of object 'CLI' was reported to have a DBCOLUMNFLAGS_ISLONG of
> 128 at compile time and 0 at run time].
>
> "John Bell" <jbellnewsposts@.hotmail.com> wrote in message
> news:3b81e6a.0411231254.6732697d@.posting.google.co m...
> news:<u8FDdVX0EHA.2624@.TK2MSFTNGP11.phx.gbl>...
> below
> debug.
> This
> online
> tracing
> using
> it as
> a
> odbc
> error:1408F0C6:SSL3_GET_RECORD:
> IDBInitialize::Initialize

Tuesday, March 6, 2012

Adding New Disk

When I add a new disk and put a file on it and make it part of the primary
filegroup, how does disk space get used? Basically my current disk is 90%
full and I'd like to relieve it as much as possible.
Files within a filegroup get filled in a proportional fill method that is
based on the amount of free space in the file. So if you added another file,
some new data would be inserted into the existing one(s) but most would go
into the new one, at least until they leveled off. If you're concerned, it
would be best to BCP out all the data, truncate the tables, add the new
file(s) and then BCP the data back in. That would fill them more evenly and
place more of the existing data on the new files.
Andrew J. Kelly SQL MVP
"Brad" <me@.privacy.net> wrote in message
news:MPG.1b435bcab46a01ee98bdd7@.news...
> When I add a new disk and put a file on it and make it part of the primary
> filegroup, how does disk space get used? Basically my current disk is 90%
> full and I'd like to relieve it as much as possible.


Adding new column to Flat File Source Connection

What is the best way to deal with a flat file source when you need to add a new column? This happens constantly in our Data Warehouse: another field gets added to one of the files to be imported as users request more data items. When I originally set the file up in Connection Managers, I used Suggest File Types and then made many adjustments to data types and lengths on the Advanced tab, because Suggest File Types goofs a lot even if you say to use 1000 rows. I have been using the Advanced tab revisions to minimize the Derived Column entries. The file is importing nightly. Now I have new fields added to this file, and when I open the Connection Manager for the file, it does not recognize the new columns in the file unless I click Reset Fields. If I click Reset Fields, it wipes out all the Advanced tab revisions! If I don't click Reset Fields, it doesn't seem to recognize that the new fields are in the file.

Is it a waste of time to make Advanced Tab type and length changes? Is it a better strategy to just use Suggest Types, and not change anything, and take whatever you get and set up more Derived Column entries? How did the designers intend for file changes to be handled?

Or is there an easy way to add new fields to this import that I am overlooking? I am finding it MUCH more laborious to set up or to modify a file load in SSIS than in DTS. In DTS, I just Edit the transformation, and add the field to the Source and Destination lists, and I'm good to go. My boss isn't understanding why a "better" version is taking so much more work!

thanks,

Holly

Ah, well, I have some sympathy for you, but as soon as you said Data Warehouse, I lost that sympathy. The flat file source CANNOT change. SSIS isn't meant to be a be-all/do-all application. There are many downstream metadata components that would surely break if the flat file source could somehow magically understand the new columns. The whole point of SSIS (and many die-hard DTS people will balk at this) is to enforce strict adherence to the way connections are built. Any changes require developer effort to ensure that the changes are handled appropriately. (No guessing!)

Suggest data types is good for a starting point, but you'll still need to go through and make sure that they are correct. The suggest feature does not scan all rows.

The more important question is why are you (the maintainers of the Data Warehouse) allowing for a constantly changing file to be introduced into the system? There should be some sort of production control in place.|||Exactly for the reasons Phil says, I don't use the "flat file" connection in SSIS for imports unless really, really needed.

If it is a standard delimited file, I use a temporary table in the same field structure as the source and the field names for the destination. Use BULK INSERT or bcp to insert into the temp table, then use SSIS to transform/move the data to the ultimate destination. In most cases I use INSERT INTO dest SELECT * FROM temptable. In other cases, I have to write a dynamic SQL statement to read and populate the matching field names between the two tables and insert them.

That way if a new field is added, all you have to do is change the temp table and the dest table. Nothing in SSIS needs to change.
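A sketch of that first step (database, table, file, and delimiter names are assumptions):

BULK INSERT staging.dbo.CustomerFile
FROM '\\etl\drop\CustomerFile.txt'
WITH (FIELDTERMINATOR = '|', ROWTERMINATOR = '\n', FIRSTROW = 2)

INSERT INTO dw.dbo.CustomerPrep
SELECT * FROM staging.dbo.CustomerFile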

Yes, I know it is slower. But the ease of changing far outweighs the added time it takes.

|||

Sorry, I obviously did not explain clearly. The files are not "constantly changing."

Business users submit requests for data warehouse additions. Our data warehouse has expanded constantly over the years. New tables, new fields in existing tables, etc. When these requests are approved, the changes have to be implemented. If the data comes from the mainframe, and if it is logically part of an existing download, the programmer adds the new field/s to the appropriate download creating the extract file. Then the data warehouse administrator has to modify the package to import the new data fields. As businesses change, new regulations, new situations, new data is needed.

I don't mean that one night, suddenly, with no warning, there are new fields in a file. But I'd love to have a dollar for every field I have added to an existing DTS import over the years.

We use DTS for our ETL. I am now converting it to SSIS, but only in the Test environment. So far I have mostly been creating the SSIS packages, but since it takes a while, and other work goes on, I am coming to situations where I have to modify existing SSIS packages that I have already converted, because I have changed the DTS packages they are converting from.

I am beginning to think that I took the wrong strategy to change types and lengths on Advanced Tab, because I can't see any way of adding another field or two, without wiping out all that work. That was the purpose of my question. What are others in this situation using as their approach.

Thank you for your reply and I would be glad to hear other opinions.

Holly

|||

Tom,

Yes, I can see your point. The ideal thing would just be to let the import file define the file structure of that temporary or prep or stage table, wherever you want to dump the data. I am importing, in most cases, into a "prep" table from the text file. However, the approach to setting up the data structure of the prep tables was defined in 1999 and 2000, and the stored procedures that load from them are expecting that structure, and there are hundreds of tables, and I really don't have time to change them and then change the stored procs to deal with the new situation..... It seems to be an example of: if you set up a new ETL using SSIS, it would be done very differently from an ETL set up under older versions. The conversion requires fixing something, and it's sort of a question of what you decide to fix.

It is interesting to hear the different ways people do things.

Holly

|||

I think a general "best practice" kind of approach to loading files is to always load them to a staging table. The goal of this initial load is that it should contain virtually no logic - it should be so simple that it virtually never fails.

A more drastic way of doing this is to use a "generic loader" setup. This consists of one staging table with enough varchar columns (max size) to match the number of columns in your widest table, plus a "target_table_name" column. You can create a generic file import process that will load every record in every file to this structure, setting the target table column differently for each load. You will have to maintain a mapping of your generic column names to the specific column names of the target table, or you can double up on your columns and add a "Column name" column for each column.
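Sketched DDL for such a loader table (names and the column count are illustrative only):

CREATE TABLE staging.dbo.GenericLoad
(
    target_table_name varchar(128) NOT NULL,
    col001 varchar(8000) NULL,
    col002 varchar(8000) NULL,
    col003 varchar(8000) NULL
    -- ...repeated out to the column count of your widest target table
)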

You can change datatypes, etc. when loading the targets from the generic table. This makes it a bit easier to recover from errors due to datatype issues, which is especially common when loading user-maintained Excel files, for example.

Obviously there are some drawbacks to this approach, like the mapping confusion it can create, as well as the fact that your loads tend to get pretty linear in execution. If you try to parallel load a setup like this the table gets thrashed too much and performance suffers. However, these tradeoffs do lead to a lot more flexibility and you have fewer ETL objects to maintain, though the maintenance may be more difficult in some cases.

There are always tradeoffs, but thought I'd throw the idea it out in case it might fit in your situation.

|||Holly,

I know it is a bunch of work to do to reimplement, I too have hundreds of flat files to import. I started this approach a long time ago because I could see this was going to be a problem.

I shortened part of my process in my description. I actually have a stored proc importing the flat files into a temp (staging) database with the table names the same as the flat files. I use a loop for each table: import each flat file (tablename.txt) into the temp table and process the tables from the "import" database into their ultimate destination.

In your case, if you take my approach you could move them into the stage tables in a stage database, and then insert into your existing "prep" tables for the rest of the processing by the stored proc.

Neither SSIS nor DTS is pretty when it comes to changing file structures for importing, especially when all you usually need to do is map txtfile.field123 to table.field123.

Good luck|||You may try DataDefractor. It handles input data change gracefully.

Regards,
Ivan