Need to convert numpy arrays to a single DatasetV1Adapter

As of now, I have generated numpy arrays for X and y (data and labels) and am trying to feed them to thisisRon's cGAN code.

Part of his code (designed for MNIST) is:

    for images, labels in train_dataset:
        gen_loss, disc_loss = train_step(images, labels)
        total_gen_loss += gen_loss
        total_disc_loss += disc_loss

From what I saw in his code, train_dataset is a DatasetV1Adapter (for MNIST). Running this code with my train_dataset gave me the error shown below:

    too many values to unpack (expected 2)

It looks to stem from my input being a numpy array X rather than a DatasetV1Adapter.
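For what it's worth, the unpack error can be reproduced with plain numpy: iterating over a 4-D array yields one sub-array per step, not (image, label) pairs, so the tuple-unpacking fails. A minimal sketch (the array shape here is just a placeholder):

```python
import numpy as np

# Placeholder stand-in for X: 5 images of 10x10x3
X = np.zeros((5, 10, 10, 3))

# Iterating a numpy array yields single image arrays, so the unpacking
# tries to split a 10-row image into exactly two variables and fails.
try:
    for images, labels in X:
        pass
except ValueError as e:
    print(e)  # "too many values to unpack (expected 2)"
```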

To work around this, I tried to write my own training loop instead:

    length = len(train_dataset)
    for i in range(0, length):
        images = X_tiny[i]
        labels = y_labels_one_hot[i]
        gen_loss, disc_loss = train_step(images, labels)
        total_gen_loss += gen_loss
        total_disc_loss += disc_loss

This gives me an error of:

    Input 0 of layer dense_49 is incompatible with the layer: : expected min_ndim=2, found ndim=1. Full shape received: [10]

This seems to stem from the same point, although I don't truly know what the error means or why it is showing up.
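If it helps diagnose: indexing the arrays one sample at a time drops the batch dimension, so the labels reach the Dense layer as a 1-D vector of shape [10], while Dense expects at least 2-D (batch, features). A minimal numpy sketch, with shapes assumed from the error message:

```python
import numpy as np

# Assumed shapes: 100 samples, 10-class one-hot labels
y_labels_one_hot = np.zeros((100, 10), dtype=np.float32)

labels = y_labels_one_hot[0]   # shape (10,), ndim=1 -- what dense_49 rejects
print(labels.ndim, labels.shape)

# Re-adding a batch axis gives the min_ndim=2 the layer expects
batched = np.expand_dims(labels, axis=0)   # shape (1, 10)
print(batched.ndim, batched.shape)
```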

I strongly believe that I need to use a DatasetV1Adapter instead, but I am unsure how to create one. Thanks in advance.

Edit: I assumed the error was purely a non-DatasetV1Adapter issue.

I did:

    train_dataset =, y_one_hot))
    train_dataset = train_dataset.shuffle(SHUFFLE_BUFFER_SIZE).batch(BATCH_SIZE)

Giving me:

    <BatchDataset shapes: ((None, 10, 10, 3), (None, 10)), types: (tf.uint8, tf.float32)>

    Dimension 0 in both shapes must be equal, but are 100 and 32. Shapes are [100] and [32]. for 'generator_24/concat' (op: 'ConcatV2') with input shapes: [100,256], [32,256], [] and with computed input tensors: input[2] = <-1>.

I cannot find any place where I have given 32 as an input, so I am very confused.
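My current guess (an assumption, since I don't have the generator source in front of me): the 32 is my BATCH_SIZE from the batched dataset, while the 100 comes from a noise tensor whose batch dimension is hard-coded somewhere in the cGAN code, so the generator tries to concat a [100, 256] noise projection with a [32, 256] label projection. A sketch of deriving the noise batch from the incoming image batch instead of hard-coding it (pure numpy, all names hypothetical):

```python
import numpy as np

BATCH_SIZE = 32
noise_dim = 256  # hypothetical latent width, matching the [*, 256] in the error

images = np.zeros((BATCH_SIZE, 10, 10, 3), dtype=np.uint8)  # one batch

# Derive the noise batch size from the images batch so the two concat
# inputs always agree on dimension 0, whatever BATCH_SIZE is.
noise = np.random.normal(size=(images.shape[0], noise_dim)).astype(np.float32)

assert noise.shape[0] == images.shape[0]  # no 100-vs-32 mismatch
```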