WP_Query & Duplicate entries

I am currently using this code to get the latest three posts on my website, and it works with no issues:

$posts = get_posts( array( 'numberposts' => 3 ) );

but I want to switch to WP_Query.

I tried a simple WP_Query call to show the latest 3 articles posted on my website:

$args  = array( 'posts_per_page' => 3 );
$posts = new WP_Query( $args );

I also used this PHP code to output the results of this WP_Query:

<?php while ( $posts->have_posts() ) : $posts->the_post(); ?>
    <?php foreach ( $posts as $p ) : ?>
        <?php the_title(); ?><br>
    <?php endforeach; ?>
<?php endwhile; ?>

But in the end it failed. Instead of getting an output list of:

First post title
Second post title
Third post title

I do get an output list of:

First post titleFirst post titleFirst post titleSecond post titleSecond post titleSecond post titleThird post titleThird post titleThird post title
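
For reference, the output I expected would come from a loop without the extra foreach, something like this minimal sketch of the standard WP_Query loop:

$args  = array( 'posts_per_page' => 3 );
$posts = new WP_Query( $args );

// One iteration (and one title) per post.
while ( $posts->have_posts() ) : $posts->the_post();
    the_title();
    echo '<br>';
endwhile;

// Restore the global post data after a custom query loop.
wp_reset_postdata();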

Performance for creating millions of database entries

This software needs to generate potentially millions of database entries. These entries contain generated codes and generated number series. Codes are sent out to end users to redeem, and number series are used internally for drawings.

class Generator
{
    public function createCodes($count, $project, $generator)
    {
        $batchSize = 250;
        $batches = ceil($count / $batchSize);

        $sharedColumns = [
            'timestamp' => time(),
            'parent_id' => $project->id,
            'source' => $generator->name,
            'generator_id' => $generator->id,
        ];

        $created = 0;
        for ($batch = 0; $batch < $batches; $batch++) {
            $size = min($batchSize, $count - $created);

            $time = time();
            $inserts = [];
            for ($i = 0; $i < $size; $i++) {
                $parentCode = self::generateCode();
                $inserts[] = [
                    'type' => 'parent',
                    'parent_code' => '',
                    'code' => $parentCode,
                    'series' => self::generateSeries(),
                ];

                for ($j = 0; $j < $generator->children; $j++) {
                    $inserts[] = [
                        'type' => 'child',
                        'parent_code' => $parentCode,
                        'code' => self::generateCode(),
                        'series' => self::generateSeries(),
                    ];
                }
            }

            $columns = array_keys($sharedColumns + $inserts[0]);
            $valueString = '(' . implode(', ', array_fill(0, count($columns), '?')) . ')';
            $queryString = 'INSERT INTO my_table (' . implode(',', array_map('Database::quoteIdentifier', $columns)) . ') VALUES ';
            $queryString .= implode(',', array_fill(0, count($inserts), $valueString));

            // Flatten all entries and add shared columns
            $values = array_reduce($inserts, function ($carry, $item) use ($sharedColumns) {
                return array_merge($carry, array_values($sharedColumns), array_values($item));
            }, []);

            $query = \Database::getInstance()->prepare($queryString);
            $query->execute($values);

            $created += $size;
        }
    }

    static $codeChars = 'ABCDEFGHJKLMNPQRTUVWXY34789';

    protected static function generateCode()
    {
        return substr(str_shuffle(self::$codeChars), 0, 9);
    }

    protected static function generateSeries()
    {
        $numbers = range(1, 30);
        shuffle($numbers);
        $numbers = array_slice($numbers, 0, 5);

        return implode(',', $numbers);
    }
}

It doesn’t guarantee uniqueness yet, which is required for both the codes and the number series. My plan is to select all duplicate codes and series (after generation and insertion) and regenerate them until no duplicates remain, as sketched below. That way I don’t have to check each of the million generated codes for uniqueness individually.
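
As a rough sketch of that duplicate check (assuming my Database wrapper returns PDO-style statements and the my_table schema from the code above), this is the kind of grouped query I have in mind:

// Hypothetical sketch: find codes that were inserted more than once.
$stmt = \Database::getInstance()->prepare(
    'SELECT code FROM my_table GROUP BY code HAVING COUNT(*) > 1'
);
$stmt->execute();
$duplicateCodes = $stmt->fetchAll(\PDO::FETCH_COLUMN);

// The rows holding these codes would then get freshly generated codes,
// and the check (here and for series) would be repeated until it returns nothing.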

I tested this with a count of 1 million, which took 163s to complete and according to memory_get_peak_usage(true) consumed 4MB of RAM (which I find unlikely, but ok).

I experimented with different batch sizes, but the gain from issuing fewer queries seemed to be largely cancelled out by the additional function calls and larger arrays in the array reduction.
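
One idea I have not benchmarked yet: since every full-size batch produces an identical statement string, the prepared statement could be reused across batches instead of being re-prepared each time. A sketch, reusing the variables from createCodes() above:

// Unbenchmarked sketch: reuse one prepared statement for all full-size batches,
// re-preparing only for the (possibly smaller) final batch.
$fullBatchStatement = null;

for ($batch = 0; $batch < $batches; $batch++) {
    $size = min($batchSize, $count - $created);
    // ... build $inserts, $queryString and $values exactly as above ...

    if ($size === $batchSize) {
        if ($fullBatchStatement === null) {
            $fullBatchStatement = \Database::getInstance()->prepare($queryString);
        }
        $query = $fullBatchStatement;
    } else {
        // The last batch has a different number of value tuples, so it needs its own statement.
        $query = \Database::getInstance()->prepare($queryString);
    }

    $query->execute($values);
    $created += $size;
}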

Is there anything I can do to decrease execution time? I don’t expect a million entries to be generated in 10 seconds, but if there’s any gain to be had I’d appreciate it.
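
For example, I was also wondering whether wrapping several batches in a single transaction would help, assuming the Database wrapper exposes PDO-style transaction methods (an assumption on my part):

// Sketch, assuming \Database::getInstance() behaves like a PDO connection.
$db = \Database::getInstance();

$db->beginTransaction();
try {
    // ... run the batched INSERTs from createCodes() here ...
    $db->commit();
} catch (\Throwable $e) {
    $db->rollBack();
    throw $e;
}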