How can I make a ReportDocument take a DataSet with many tables?

I fill the DataSet with the query below and then call it. The report loads, but SetDataSource does not apply the ORDER BY from the query. The WHERE does work, though. I've tried a thousand ways and can't get it going. Question: is there another way to send a query to a Crystal Reports report?

DataSet query

string sql = "SELECT *, (cli_cuit1 + '-' + cli_cuit2 + '-' + cli_cuit3) AS CUIT " +
             "FROM Cliente " +
             "LEFT JOIN Localidad ON cli_codpos1 = loc_cod1 AND cli_codpos2 = loc_cod2 " +
             "LEFT JOIN Estado ON est_codigo = cli_estado " +
             "LEFT JOIN Zona ON cli_zona = zon_codigo " +
             "LEFT JOIN Vendedor ON cli_vendedor = ven_codigo " +
             "LEFT JOIN Actividad ON cli_actividad = act_codigo " +
             "LEFT JOIN Categoria ON cli_categoria = cat_codigo " +
             "LEFT JOIN CATIIBB ON cli_catIIBB = cib_codigo " +
             "LEFT JOIN CondIVA ON cli_iva = iva_codigo " +
             "LEFT JOIN Formpag ON cli_formpag = for_codigo " +
             "LEFT JOIN Tipcta ON cli_condvta = tip_codigo " +
             "LEFT JOIN Concepto ON cli_concepto = con_codigo " +
             "LEFT JOIN Proveedor ON cli_proveedor = prv_codigo " +
             "LEFT JOIN EstCivil ON cli_estcivil = est_codigo " +
             "WHERE Cliente.cli_codigo > 5000 ORDER BY Cliente.cli_codigo ASC";

Report-loading method

public void repo()
{
    ReportDocument rpt = new ReportDocument();
    clsClientes objclsCliente = new clsClientes();
    ConnectionInfo cnn = new ConnectionInfo();
    TableLogOnInfos crtablelogoninfos = new TableLogOnInfos();
    TableLogOnInfo crtablelogoninfo = new TableLogOnInfo();
    Tables CrTables;

    cnn.ServerName = @"NomServer";
    cnn.DatabaseName = "Nombase";
    cnn.UserID = "sa";
    cnn.Password = "****";

    // Note: the backslash must be escaped ("\\") in a regular C# string literal.
    rpt.Load(Application.StartupPath + "\\ClientesResum.rpt", OpenReportMethod.OpenReportByTempCopy);

    CrTables = rpt.Database.Tables;
    foreach (Table CrTable in CrTables)
    {
        crtablelogoninfo = CrTable.LogOnInfo;
        crtablelogoninfo.ConnectionInfo = cnn;
        CrTable.ApplyLogOnInfo(crtablelogoninfo);
    }

    rpt.SetDataSource(objclsCliente.GetAllCliente()); // fetch the DataSet via a method

    frmReporteViewer llamar = new frmReporteViewer();
    llamar.ReportViewer.ReportSource = rpt;
    llamar.ReportViewer.Refresh();
    llamar.Show();
}
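For what it's worth, a Crystal Report's own record-sort settings can take precedence over the ORDER BY of the source query, so one workaround is to sort the rows in code before calling SetDataSource (in C#, for example, through the DataTable's DataView.Sort). A language-neutral sketch of that sorting step, in Python with invented row data:

```python
# Hypothetical illustration: sort rows by cli_codigo before handing them
# to the report, instead of relying on the query's ORDER BY surviving.
def sort_rows(rows, key="cli_codigo"):
    """Return rows sorted in ascending order by the given column name."""
    return sorted(rows, key=lambda r: r[key])

rows = [
    {"cli_codigo": 5003, "nombre": "B"},
    {"cli_codigo": 5001, "nombre": "A"},
    {"cli_codigo": 5002, "nombre": "C"},
]
print([r["cli_codigo"] for r in sort_rows(rows)])  # → [5001, 5002, 5003]
```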

Which is the best way to create a photo dataset? [on hold]

I am working at a startup and we need to create a dataset with multiple photos in order to train our neural networks. We would need to get photos from Amazon and similar websites.

Do you know any tool or technique (even if it is necessary to write some code) to automate this task?

We know about Kaggle and similar data repositories; those datasets are useful but not enough.
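A common approach is to gather a list of image URLs (check each site's terms of service and robots.txt before scraping) and download them with a small script. A minimal sketch using only the Python standard library; the URLs and folder name below are invented for illustration:

```python
import os
import urllib.request
from urllib.parse import urlparse

def filename_from_url(url, index):
    """Derive a stable local filename from an image URL."""
    name = os.path.basename(urlparse(url).path) or "image"
    return f"{index:05d}_{name}"

def download_images(urls, out_dir="photos"):
    """Download every URL into out_dir, skipping links that fail."""
    os.makedirs(out_dir, exist_ok=True)
    for i, url in enumerate(urls):
        path = os.path.join(out_dir, filename_from_url(url, i))
        try:
            urllib.request.urlretrieve(url, path)  # fetch one image
        except OSError as exc:
            print(f"skipped {url}: {exc}")

# Hypothetical URL list; only the filename helper is exercised here:
urls = ["https://example.com/images/cat.jpg"]
print(filename_from_url(urls[0], 0))  # → 00000_cat.jpg
```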

Thank you for your help.

Problem with a DataSet and DataGridView [C#]

Hello, how is everyone?

I am having trouble populating a DataGridView with data from a DataSet.

In my button1 I search for, read, and import into the DataGridView all the XML files from a given directory. Up to that point it works fine: the data is displayed in the DataGridView.

DataSet dataSet = new DataSet();
try
{
    string[] array2 = Directory.GetFiles("temPFiles", "*.xml");

    foreach (string name in array2)
    {
        dataSet.ReadXml(name);
        dtGridViewImportacao.DataSource = dataSet.Tables[9];
    }

    try
    {
        prgsbarImportacaoXML.BeginInvoke(new Action(() => {
            prgsbarImportacaoXML.Style = ProgressBarStyle.Marquee;
        }));

        prgsbarImportacaoXML.BeginInvoke(new Action(() => {
            prgsbarImportacaoXML.MarqueeAnimationSpeed = 5;
        }));

        lblStatus.BeginInvoke(new Action(() =>
        {
            lblStatus.Text = "     Reading XML files individually...";
        }));
    }
    catch (Exception exImportarArqXML)
    {
        MessageBox.Show("Could not import and save the contents of the XML file. Make sure it has a correct, valid structure for a purchase XML. The application is closing. Make sure the selected file is an XML associated with a PURCHASE invoice!\n\n" + exImportarArqXML.Message, "ERROR!", MessageBoxButtons.OK, MessageBoxIcon.Error);
        this.Close();
    }
}
catch (Exception exLeituraXML)
{
    MessageBox.Show("Error reading the retrieved XML files individually!\n\n" + exLeituraXML.Message, "ERROR!", MessageBoxButtons.OK, MessageBoxIcon.Error, MessageBoxDefaultButton.Button1);
    Close();
}
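As an aside, the loop above calls ReadXml into the same DataSet for every file and rebinds Tables[9] each time. A language-neutral sketch of reading XML files into one flat row list, in Python with the standard library (the element names are invented; real NF-e XML is namespaced and far more complex):

```python
import xml.etree.ElementTree as ET

def rows_from_xml(xml_text, tag="prod"):
    """Collect one dict of child-tag -> text per <tag> element."""
    root = ET.fromstring(xml_text)
    return [{child.tag: child.text for child in elem} for elem in root.iter(tag)]

# Toy document standing in for one purchase-invoice XML file:
sample = "<nfe><det><prod><cProd>123</cProd><xProd>Widget</xProd></prod></det></nfe>"
print(rows_from_xml(sample))  # → [{'cProd': '123', 'xProd': 'Widget'}]
```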


However, in button2, when I remove some columns I don't need, the following error occurs:

'InvalidArgument=Value '1' is not a valid value for 'index'. Arg_ParamName_Name'

in the following part of the code:

this.Invoke(new Action(() => lblStatus.Text = "  Reorganizing columns not needed for reading..."));

dtGridViewImportacaoXML.Columns.RemoveAt(1); // ERROR HERE
dtGridViewImportacaoXML.Columns.RemoveAt(2); // ERROR HERE

// The identical try/catch-wrapped BeginInvoke block is repeated once per column name:
string[] columnsToRemove = { "CFOP", "uCom", "qCom", "vUnCom", "vProd", "cEANTrib",
                             "uTrib", "qTrib", "vUnTrib", "vDesc", "indTot", "CEST",
                             "indEscala", "xPed", "nFCI" };
foreach (string col in columnsToRemove)
{
    dtGridViewImportacaoXML.BeginInvoke(new Action(() => {
        try
        {
            dtGridViewImportacaoXML.Columns.Remove(col);
        }
        catch (System.ArgumentException)
        {
        }
        catch (Exception exCPROD)
        {
        }
    }));
}

//lblStatus.BeginInvoke(new Action(() =>
//{
//    lblStatus.Text = "     Renaming columns needed for reading...";
//}));

//dtGridViewImportacaoXML.BeginInvoke(new Action(() =>
//{
//    try
//    {
//        dtGridViewImportacaoXML.Columns[0].HeaderText = "Cód. Barras";
//    }
//    catch (Exception exCodBarras)
//    {
//        MessageBox.Show("Could not find the column for the product barcode(s). The application is closing. Make sure the selected file is an XML associated with a PURCHASE invoice!\n\n" + exCodBarras.Message, "ERROR!", MessageBoxButtons.OK, MessageBoxIcon.Error, MessageBoxDefaultButton.Button1);
//        Application.Exit();
//    }
//}));

//dtGridViewImportacaoXML.BeginInvoke(new Action(() =>
//{
//    try
//    {
//        dtGridViewImportacaoXML.Columns[1].HeaderText = "NCM's";
//    }
//    catch (Exception exNCM)
//    {
//        MessageBox.Show("Could not find the NCM column for the product(s). The application is closing. Make sure the selected file is an XML associated with a PURCHASE invoice!\n\n" + exNCM.Message, "ERROR!", MessageBoxButtons.OK, MessageBoxIcon.Error, MessageBoxDefaultButton.Button1);
//        Application.Exit();
//    }
//}));
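Removing columns by numeric index is fragile: each removal shifts the indices of everything after it, and the asynchronous BeginInvoke removals make the ordering unpredictable, which is consistent with an "InvalidArgument ... index" error. Removing in descending index order (or by name) sidesteps the shift; a small Python demonstration of the idea:

```python
def remove_columns_by_index(cols, indices):
    """Remove by index in descending order so earlier removals
    do not shift the positions of later targets."""
    for i in sorted(indices, reverse=True):
        del cols[i]
    return cols

# Removing indices 1 and 2 from four columns leaves the first and last:
print(remove_columns_by_index(["A", "B", "C", "D"], [1, 2]))  # → ['A', 'D']
```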

It is as if the columns did not exist.

Then in my button3, which should export the records to a text file, nothing happens either. Even though the records exist in the grid, nothing is exported.

for (int i = 0; i < dtGridViewImportacaoXML.Rows.Count - 1; i++)
{
    using (System.IO.StreamWriter file =
        new System.IO.StreamWriter(@"DataBase\db.txt", true))
    {
        try
        {
            file.WriteLine(dtGridViewImportacaoXML.Rows[i].Cells[0].Value.ToString() + "," + dtGridViewImportacaoXML.Rows[i].Cells[1].Value.ToString());
        }
        catch (Exception exLerLinha)
        {
            MessageBox.Show("Could not read the individual rows of the XML file. Make sure it has a correct, valid structure for a purchase XML. The application is closing. Make sure the selected file is an XML associated with a PURCHASE invoice!\n\n" + exLerLinha.Message, "ERROR!", MessageBoxButtons.OK, MessageBoxIcon.Error);
            this.Close();
        }
    }
}
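One thing worth checking: the export loop and the row-count checks reference different grids (dtGridViewImportacaoXML versus dtGridViewImportacao), and with AllowUserToAddRows enabled the grid's last row is an empty placeholder whose null cells make Value.ToString() throw. A sketch of exporting while skipping incomplete rows, in Python with invented file and row data:

```python
def export_rows(rows, path):
    """Write 'cell0,cell1' per row, skipping rows with missing cells
    (such as a grid's empty new-row placeholder)."""
    with open(path, "w", encoding="utf-8") as f:
        for row in rows:
            if row[0] is None or row[1] is None:
                continue  # incomplete row: nothing to export
            f.write(f"{row[0]},{row[1]}\n")

export_rows([("123", "Widget"), (None, None)], "db.txt")
print(open("db.txt", encoding="utf-8").read())  # → 123,Widget
```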

The curious thing is that I used the following MessageBox calls in each button's Click event to check the number of rows and columns, and both show values greater than zero.

MessageBox.Show(dtGridViewImportacao.Rows.Count.ToString());    // ROW COUNT IS GREATER THAN 0

MessageBox.Show(dtGridViewImportacao.Columns.Count.ToString()); // COLUMN COUNT IS GREATER THAN 0

Could someone tell me where I am going wrong, please? Thanks in advance for everyone's attention!

NoneType object is not iterable while iterating over an image and mask dataset

I am trying to plot a dataset using matplotlib:

for i in range(5):
    image, mask = dataset[np.random.randint(0, len(dataset))]
    plot2x2Array(image, mask)

in ()
      1 for i in range(5):
----> 2     image, mask = dataset[np.random.randint(0, len(dataset))]
      3     plot2x2Array(image, mask)

TypeError: 'NoneType' object is not iterable
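The traceback means dataset[...] returned None for that index, and unpacking None into image, mask raises exactly this TypeError. A guard can be sketched like this (the toy dataset below stands in for the real one, and plot2x2Array is replaced by a print):

```python
import random

def safe_samples(dataset, n):
    """Pick n random items, skipping None entries (a None entry usually
    means that index failed to load, e.g. a missing or corrupt image)."""
    candidates = [item for item in dataset if item is not None]
    if not candidates:
        raise ValueError("every item in the dataset is None")
    return [candidates[random.randrange(len(candidates))] for _ in range(n)]

# Toy stand-in for the real dataset; index 1 failed to load:
dataset = [("img0", "mask0"), None, ("img2", "mask2")]
for image, mask in safe_samples(dataset, 2):
    print(image, mask)  # plot2x2Array(image, mask) in the original code
```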

Time series on a small dataset, and applying the ADF test is not working

Hi, I am new to R and was trying to convert a dataframe into a time series object, but after applying group_by on a certain index the class changes to the "tbl_df" "tbl" "data.frame" format. I am also trying to make another dataframe subset out of an existing dataframe, which returns NULL. Also, after converting the dataframe into a time series object, it becomes a ts matrix. Can you please let me know why all these issues are happening?

I have tried all the basic operations but am somehow missing the background interpretation of the code used. Kindly help.

data <- read.csv("Time_Series_Data_Peak2.csv")
head(data)
class(data)

# Group by
library(dplyr)
Dates_class = data %>%
  group_by(Date) %>%
  summarise(Dates_class = sum(Calls_Handled))
View(Dates_class)
head(Dates_class)

plot(Dates_class$Date, Dates_class$Dates_class)
lines(Dates_class$Date, Dates_class$Dates_class)
class(Dates_class)

Dates_class1 <- ts(Dates_class, start = c(2019, 3), end = c(2019, 5), frequency = 1)

I want the data to be ready for checking stationarity.
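For comparison, the group-by-Date-and-sum step on its own is a small operation; here it is sketched in Python with only the standard library (column names taken from the R code above, data invented):

```python
from collections import defaultdict

def calls_per_date(rows):
    """Sum Calls_Handled per Date, like the dplyr group_by/summarise above."""
    totals = defaultdict(int)
    for row in rows:
        totals[row["Date"]] += row["Calls_Handled"]
    return dict(totals)

rows = [
    {"Date": "2019-03-01", "Calls_Handled": 10},
    {"Date": "2019-03-01", "Calls_Handled": 5},
    {"Date": "2019-03-02", "Calls_Handled": 7},
]
print(calls_per_date(rows))  # → {'2019-03-01': 15, '2019-03-02': 7}
```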

I want to select pairs of columns with one fixed column, the other being any one column from the dataset, using R

I have a dataset with 5 columns. I want to select 4 pairs of data and store them under 4 names. However, I want to take the 1st column for all these pairs, and the other column of each pair will be any one of the other 4 columns.

Target  Var1  Var2   Var3   Var4
1       1167  56130  0.591  0.248
0       1677  47681  0.425  0.875
1       603   22006  0.462  1.401
1       489   68545  0.348  0.869
0       1479  38670  0.919  1.678
1       976   15307  0.268  1.056
1       1509  53761  0.81   1.76
1       1093  54701  0.875  1.03
0       648   68620  0.728  1.013
1       1501  58637  0.428  0.651
0       308   54036  0.814  1.084
1       1609  86235  0.136  1.29
1       817   29216  0.422  0.177
1       460   70500  0.912  1.654
1       1190  89207  0.397  0.191
1       1128  40301  0.771  1.08
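What is being asked for is the fixed first column paired with each of the other four columns in turn (in R itself this could be done with lapply over the variable names). The same idea sketched in Python over a column dictionary, with data abbreviated from the table above:

```python
def make_pairs(table, fixed="Target"):
    """Pair the fixed column with each remaining column in turn."""
    return {name: (table[fixed], col)
            for name, col in table.items() if name != fixed}

# Data abbreviated from the table above:
table = {
    "Target": [1, 0, 1],
    "Var1": [1167, 1677, 603],
    "Var2": [56130, 47681, 22006],
}
pairs = make_pairs(table)
print(sorted(pairs))  # → ['Var1', 'Var2']
```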

How do I sync my Dataset in Visual Studio with SQL Server Datatype Changes without losing custom queries?

I know that similar questions have been asked, but I haven’t been able to get an exact answer.

I have a Dataset.xsd item in Visual Studio that is based on SQL Server tables. If I make lots of SQL Server column changes (data length, null to not null, new columns, varchar to nvarchar), how can I run the Custom Tool without it potentially wiping out my TableAdapter's existing custom queries? Or is there another way to sync them? I can manually modify the table columns in the XSD file, but that gets tedious if I have hundreds of changes to make, and I am also afraid I might miss one or two.

BTW, I am currently using VS2017 and SQL Server 2016.

R: invalid subscript type 'list' in one dataset but not another that is class spec_tbl_df

I have two datasets with lots of columns (55 in dataset one and 42 in dataset two). Each of them has many columns whose names start with "dist_", and I'm trying to get the code to identify which column holds the smallest distance and record that location. They both look like this, but dataset one has more demographic info than the second one.

  ilt_city__state_country dist_antwerp dist_bangkok dist_beijing
  <chr>                          <dbl>        <dbl>        <dbl>
1 tokyo na jpn                   5855.        2866.     1.31e+ 3
2 tokyo na jpn                   5855.        2866.     1.31e+ 3
3 beijing na chn                 4931.        2049.     1.32e-12
4 singapore na sgp                 NA           NA     NA
5 so paulo na bra                6025.       10194.     1.09e+ 4

So the code that I have to identify the closest location looks like this:

Enrollment_Report$nearest_hub <- hub_locations_list$hub_loc[apply(Enrollment_Report[grep("^dist", names(Enrollment_Report))], 1, which.min)]

And it gives this error:

Error in hub_locations_list$hub_loc[apply(Enrollment_Report[grep("^dist",  :
  invalid subscript type 'list'

But what’s weird is when I use essentially the same code on the second dataset, it works seamlessly:

all_locations$nearest_hub <- hub_locations_list$hub_loc[apply(all_locations[grep("^dist", names(all_locations))], 1, which.min)]

I’ve tried using unlist(), but that doesn’t work. I also tried looking up the class of the datasets, and this is what I get:

> class(Enrollment_Report)
[1] "tbl_df"     "tbl"        "data.frame"
> class(all_locations)
[1] "spec_tbl_df" "tbl_df"      "tbl"         "data.frame"

I’ve never heard of a spec_tbl_df, but that’s the only difference that I can find between these two datasets. Is there a way to make the first one also spec_tbl_df, and what is that?

Thank you!

MySQL query to return breaks in a dataset

My IoT slave devices send incrementally numbered packet_ids to a Raspberry Pi, which logs the packets in a MySQL table.

I want to run a script that checks for breaks in the packet_ids per device and returns the device|packet_id immediately after each break, representing missed messages.

I know basic querying but the syntax of this function is beyond me. Any help is greatly appreciated.

device_id packet_id
1 1
1 2
1 3
1 4
1 5
1 6
1 7
1 9
1 10
2 5
2 6
2 7
2 8
2 10

In the db excerpt, 1|9 & 2|10 would be returned.
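For reference, this is usually done in SQL with a self-join on packet_id - 1, or with the LAG() window function on MySQL 8+. The logic itself, sketched in Python over the sample rows above:

```python
def packets_after_gaps(rows):
    """rows: (device_id, packet_id) pairs sorted by device then packet.
    Return the first packet after each break in a device's sequence."""
    result = []
    prev = {}  # device_id -> last packet_id seen for that device
    for device, packet in rows:
        if device in prev and packet != prev[device] + 1:
            result.append((device, packet))  # gap: packet after the break
        prev[device] = packet
    return result

# The sample table from the question:
rows = [(1, 1), (1, 2), (1, 3), (1, 4), (1, 5), (1, 6), (1, 7), (1, 9), (1, 10),
        (2, 5), (2, 6), (2, 7), (2, 8), (2, 10)]
print(packets_after_gaps(rows))  # → [(1, 9), (2, 10)]
```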