Can I “High Five” my duplicates from the Mirror Image spell?

The text for Mirror Image says:

the duplicates move with you and mimic your actions […] If an attack hits a duplicate, the duplicate is destroyed.

Does this mean that my warlock can cast Mirror Image, slay an opponent in the first two rounds, then spend the remaining eight rounds giving himself one high five after another? Then wave goodbye to the three images when the spell duration expires normally?

Or does one high five from the caster destroy his own duplicate? One destroyed image with each flat palm slap?

Assume that at the end of round two, no opponent has hit a duplicate, and the three original duplicates are still present when celebratory hand gestures commence.

Why is my view selecting hundreds of duplicates?

This view selects 696 entries. The CSV file has 48 entries.

CREATE OR REPLACE VIEW insert_3_char_abts AS
SELECT
    ext.construct_id,
    n_term,
    enz_name,
    c_term,
    cpp,
    mutations,
    ext.g_batch,
    ext.p_batch,
    emptycol,
    c_batch,
    abts5_mean,
    abts5_SD,
    abts5_n,
    abts5_method,
    abts5_study_id,
    abts7_mean,
    abts7_SD,
    abts7_n,
    abts7_method,
    abts7_study_id,
    pur.pk_purified_enz_id
FROM EXTERNAL (
    (
        construct_id NUMBER(10),
        n_term VARCHAR2(50),
        enz_name VARCHAR2(50),
        c_term VARCHAR2(50),
        cpp VARCHAR2(50),
        mutations VARCHAR2(50),
        g_batch VARCHAR2(50),
        p_batch VARCHAR2(50),
        emptycol VARCHAR2(50),
        c_batch VARCHAR2(50),
        abts5_mean NUMBER(5, 2),
        abts5_SD NUMBER(5, 2),
        abts5_n NUMBER(3),
        abts5_method VARCHAR2(50),
        abts5_study_id VARCHAR2(8),
        abts7_mean NUMBER(5, 2),
        abts7_SD NUMBER(5, 2),
        abts7_n NUMBER(3),
        abts7_method VARCHAR2(50),
        abts7_study_id VARCHAR2(8)
    )
    TYPE ORACLE_LOADER
    DEFAULT DIRECTORY data_to_input
    ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        SKIP 1
        BADFILE bad_files:'badflie_view_before_insert_char_abts.bad'
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        MISSING FIELD VALUES ARE NULL
    )
    LOCATION ('CHAR_ABTS.CSV')
    REJECT LIMIT UNLIMITED
) ext
INNER JOIN purified_enz pur
    ON ext.p_batch = pur.p_batch
INNER JOIN produced pr
    ON pr.pk_produced_id = pur.fk_produced_id;

If I finish this statement with

AND pr.fk_construct_id = ext.construct_id; 

It selects 46 out of 48 records, which is better, but not great.

List of tuples without duplicates & repeated values

Given some number n and a set of values vals, I want to obtain all the tuples of size n over the values in vals, but without any tuples that are merely reorderings of one another. For example:

n = 2, vals = {0,1}   --> { {0,0}, {0,1}, {1,1} }
n = 2, vals = {0,1,2} --> { {0,0}, {0,1}, {0,2}, {1,1}, {1,2}, {2,2} }
n = 3, vals = {0,1}   --> { {0,0,0}, {0,0,1}, {0,1,1}, {1,1,1} }
n = 3, vals = {0,1,2} --> { {0,0,0}, {0,0,1}, {0,0,2}, {0,1,1}, {0,1,2}, {0,2,2}, {1,1,1}, {1,1,2}, {1,2,2}, {2,2,2} }

I’ve tried the following commands:

n    = 2;
vals = {0, 1};
Tuples[vals, {n}]        (* gives { {0, 0}, {0, 1}, {1, 0}, {1, 1} } *)
Permutations[vals, {n}]  (* gives { {0, 1}, {1, 0} } *)
Subsets[vals, {n}]       (* gives { {0, 1} } *)

Permutations and Subsets are incomplete. Tuples contains all the right combinations, but also contains duplicates like {0, 1} and {1, 0}. Since I do not care about order, I’d like to remove those.

How do I achieve the behavior of Tuples, but without duplicates?
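For what it's worth, the behavior being asked for is exactly "combinations with repetition": unordered selections of size n in which values may repeat. A minimal sketch in Python, used here purely to pin down the expected output (the helper name unordered_tuples is mine, not a built-in anywhere):

from itertools import combinations_with_replacement

def unordered_tuples(vals, n):
    """All size-n selections from vals, allowing repeats, ignoring order."""
    # combinations_with_replacement emits each multiset exactly once,
    # so no duplicate-removal pass is needed afterwards.
    return [tuple(c) for c in combinations_with_replacement(sorted(vals), n)]

print(unordered_tuples({0, 1}, 2))     # [(0, 0), (0, 1), (1, 1)]
print(unordered_tuples({0, 1, 2}, 2))  # [(0, 0), (0, 1), (0, 2), (1, 1), (1, 2), (2, 2)]
print(unordered_tuples({0, 1}, 3))     # [(0, 0, 0), (0, 0, 1), (0, 1, 1), (1, 1, 1)]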

Trees with duplicates and path cancellations

I have a bunch of objects that I am not sure how to represent if I want to minimize memory usage and, if possible, avoid a large CPU overhead. The most natural way to see them is as a tree. A picture is worth a thousand words:

[tree diagram]

As you can see, at each layer there are many nodes that are equal. Each tree node contains an object, which I have represented here simply as an integer. Between each pair of connected nodes there are also a bunch of other objects, represented by the $L_x$ values on the arrows; all the $L_x$ values between the same pair of layers are equal.

In an effort to reduce the memory footprint, I simplified the data structure into a linked list, as shown in the image below:

[linked-list diagram]

So, each node of the new list can be of two types, let’s say Simple and Aggregate. An Aggregate node contains the tree nodes, each of which may be connected to one or more other tree nodes. It also contains a label, which is not shown in the picture.
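To pin the layout down, here is a rough sketch of the two node types as I understand them; the names Simple, Aggregate and the label field come from the description above, while everything else (field names, types) is an assumption made only for illustration:

class SimpleNode:
    """A plain list element: one payload object plus a link to the next list node."""
    def __init__(self, value, next_node=None):
        self.value = value      # the contained object (an integer in the figures)
        self.next = next_node

class AggregateNode:
    """Groups the tree nodes of one layer and carries the shared label."""
    def __init__(self, label, members, next_node=None):
        self.label = label      # the label used to decide whether cancellations can occur
        self.members = members  # the contained tree nodes, e.g. a list of integers
        self.next = next_node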

Problem

There may be cancellations in the tree, depending on specific parameters of the tree nodes. To put it simply, a cancellation between the inner tree nodes may happen if and only if two aggregate nodes have the same label. For example, it may happen that tree nodes 1 and 8 have the same label and cancel out: every path leading from 1 to 8 must then be deleted (so both 1-4-8 and 1-6-8). How can I keep track of this cancellation in the new linked list?

Can you cast Mirror Image twice on successive turns and have 6 duplicates of yourself instead of 3?

Mirror Image (PHB, p. 260) is a spell with a casting time of 1 action and a duration of 1 minute. My question is: can you cast Mirror Image twice on successive turns, gaining 6 images by turn 2?

Sample Turns:

Turn 1: Mirror Image (3 images)

Turn 2: Mirror Image again (6 images, with 3 of them lasting 1 turn less)

Since Mirror Image doesn’t require concentration, there should be no interference between the two casts in terms of concentration. I know this could come down to DM fiat, but I was wondering whether there is a proper interpretation of the printed rules for this situation.

Generate only unique combinations when input contains duplicates

I have a list with repeated elements, such as

list = {a, a, b, c, c, c}

and I’d like a list of the unique ways to choose 3 elements from it:

{{a, a, b}, {a, a, c}, {a, b, c}, {a, c, c}, {b, c, c}, {c, c, c}} 

Alas, “unique” means two different things at once in that sentence, and I can’t figure out how to achieve both types of uniqueness simultaneously.

I could use Permutations, whose documentation indicates regarding the input that

Repeated elements are treated as identical.

But I will have many results that differ only by rearrangement, and I do not care about order:

Permutations[list, {3}]

{{a, a, b}, {a, a, c}, {a, b, a}, {a, b, c}, {a, c, a}, {a, c, b}, {a, c, c}, {b, a, a}, {b, a, c}, {b, c, a}, {b, c, c}, {c, a, a}, {c, a, b}, {c, a, c}, {c, b, a}, {c, b, c}, {c, c, a}, {c, c, b}, {c, c, c}} 

To eliminate the rearrangements, I could try using Subsets instead, but per its documentation,

Different occurrences of the same element are treated as distinct.

As a result I get many duplicate results that I don’t want due to the repeated elements of list:

Subsets[list, {3}]

{{a, a, b}, {a, a, c}, {a, a, c}, {a, a, c}, {a, b, c}, {a, b, c}, {a, b, c}, {a, c, c}, {a, c, c}, {a, c, c}, {a, b, c}, {a, b, c}, {a, b, c}, {a, c, c}, {a, c, c}, {a, c, c}, {b, c, c}, {b, c, c}, {b, c, c}, {c, c, c}} 

[Frustrated aside: I can’t begin to imagine why Mathematica’s permutations-generating function treats repeated list items differently than its combinations-generating function.]

I could eliminate the duplicates from either result, but either way, that still requires calculating the full list of nonunique results as an intermediate step, which I expect to be many orders of magnitude longer than the unique results.

Is it possible to get the result I’m after without having to cull a humongously longer list first to get there?
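For comparison, the result can in principle be generated directly, without materializing the much longer duplicated list first: walk the distinct elements together with their multiplicities and branch on how many copies of each to take. A sketch in Python (purely illustrative; this is not a Mathematica built-in, and the function name is my own):

from collections import Counter

def unique_combinations(items, k):
    """Yield each size-k multiset combination of items exactly once."""
    counted = sorted(Counter(items).items())   # e.g. [('a', 2), ('b', 1), ('c', 3)]

    def rec(i, remaining):
        if remaining == 0:
            yield ()
            return
        if i == len(counted):
            return
        value, available = counted[i]
        # Take between min(available, remaining) and 0 copies of this value,
        # then fill the rest from the remaining distinct values.
        for take in range(min(available, remaining), -1, -1):
            for rest in rec(i + 1, remaining - take):
                yield (value,) * take + rest

    yield from rec(0, k)

print(list(unique_combinations(["a", "a", "b", "c", "c", "c"], 3)))
# [('a', 'a', 'b'), ('a', 'a', 'c'), ('a', 'b', 'c'), ('a', 'c', 'c'),
#  ('b', 'c', 'c'), ('c', 'c', 'c')]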

Time complexity of removing duplicates in lists


Question

I am wondering what the minimum time complexity is of getting the unique values of an array under two conditions: keeping the original order or not.

I think the time complexity of not keeping the order is $O(n)$ using a hash table, and that keeping the order has a time complexity of $O(n^2)$.

So, am I right? Can someone give a detailed proof of the best achievable time complexity in each condition?

For more information:

I use Python, and I found two Python approaches for dropping duplicates from a list:

  1. How do you remove duplicates from a list whilst preserving order?
  2. removing duplicates in lists
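For context, the standard order-preserving approach does a single pass and uses a hash set for membership tests; assuming average-case $O(1)$ hash operations it therefore runs in expected $O(n)$ time rather than $O(n^2)$. A minimal sketch:

def dedupe_keep_order(items):
    """Return the unique items of a list, preserving first-seen order."""
    seen = set()
    result = []
    for x in items:
        if x not in seen:        # average-case O(1) hash lookup
            seen.add(x)
            result.append(x)
    return result

print(dedupe_keep_order([3, 1, 3, 2, 1]))  # [3, 1, 2]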

Finding duplicates and updating a column with values according to the number of duplicates, using a MySQL stored procedure

I am using a MySQL 10.1.29-MariaDB database to create a new table.

What I am trying to do is increment a number in another column for each duplicate appearance of a company_name value. For example, for the tables below

(provided that the order_placed column of both tables is null):

# req

+--------------------------------------------------+
|                        req                       |
+--------------------------------------------------+
| req_id | order_placed | contact_id | seq_records |
+--------+--------------+------------+-------------+
| 1      |         null |       1000 |        null |
+--------+--------------+------------+-------------+
| 2      |         null |       1002 |        null |
+--------+--------------+------------+-------------+
| 3      |         null |       1003 |        null |
+--------+--------------+------------+-------------+

+--------------------------------------------------------------------+
|                               contact                              |
+--------------------------------------------------------------------+
| contact_id | first_name | order_placed | company_name | company_id |
+------------+------------+--------------+--------------+------------+
| 1000       | dirt       |         null |         Asus | 12         |
+------------+------------+--------------+--------------+------------+
| 1002       | dammy      |         null |         Asus | 12         |
+------------+------------+--------------+--------------+------------+
| 1003       | samii      |         null |         Asus | 12         |
+------------+------------+--------------+--------------+------------+
| 1004       | xenon      |         null |       Lenova | 1          |
+------------+------------+--------------+--------------+------------+

CREATE TABLE `req` (
  `req_id` bigint(20) NOT NULL,
  `order_placed` char(1) COLLATE utf8_bin DEFAULT NULL,
  `contact_id` bigint(20) DEFAULT NULL,
  `seq_records` bigint(2) DEFAULT NULL,
  PRIMARY KEY (`req_id`),
  KEY `contact_id` (`contact_id`),
  CONSTRAINT `req_ibfk_10` FOREIGN KEY (`contact_id`) REFERENCES `contact` (`contact_id`)
)
/*!40101 SET character_set_client = @saved_cs_client */;

# contact

CREATE TABLE contact (
  contact_id bigint(20) NOT NULL,
  `first_name` varchar(100) COLLATE utf8_bin NOT NULL,
  `company_name` varchar(100) COLLATE utf8_bin DEFAULT NULL,
  `company_id` varchar(100) COLLATE utf8_bin DEFAULT NULL,
  `order_placed` char(1) COLLATE utf8_bin DEFAULT NULL,
  PRIMARY KEY (`contact_id`),
  KEY `index_name` (`contact_id`)
);

Query used

DELIMITER $$

DROP PROCEDURE IF EXISTS `recordsequence` $$

CREATE PROCEDURE `recordsequence` ()
BEGIN
    DECLARE companyname VARCHAR(250);
    DECLARE recordcount INTEGER DEFAULT 0;
    DECLARE duplcount INTEGER DEFAULT 0;
    DECLARE vfinished INTEGER DEFAULT 0;
    DECLARE icount INT DEFAULT 0;
    DECLARE records_cursor CURSOR FOR
        SELECT c.company_name, COUNT(c.company_name), r.opr_id
        FROM contact c, request r
        WHERE c.contact_id = r.contact_id
          AND r.order_placed IS NULL
        GROUP BY c.company_name;
    -- declare NOT FOUND handler
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET vfinished = 1;

    OPEN records_cursor;
    transfer_records: LOOP
        FETCH records_cursor INTO companyname, duplcount;
        IF vfinished = 1 THEN
            LEAVE transfer_records;
        END IF;

        BEGIN
            SET recordcount := duplcount;
            SET icount := 1;
            DECLARE records_cursor1 CURSOR FOR
                SELECT c.contact_id, c.company_name
                FROM contact c, request r
                WHERE c.company_name = companyname
                  AND c.contact_id = r.contact_id
                  AND r.order_placed IS NULL
                GROUP BY c.company_name;
            -- declare NOT FOUND handler
            DECLARE CONTINUE HANDLER FOR NOT FOUND SET vfinished = 1;

            OPEN records_cursor1;
            transfer_records1: LOOP
                FETCH records_cursor INTO contactid, companyname;
                IF vfinished = 1 THEN
                    LEAVE transfer_records1;
                END IF;

                BEGIN
                    UPDATE contact SET reorder_sequence = icount WHERE contact_id = contactid;
                    SET icount := icount + 1;
                END;
            END LOOP transfer_records1;

            CLOSE records_cursor1;

            IF (recordcount == icount) THEN
                SELECT CONCAT('company_name Updated successfully', companyname);
            ELSE
                SELECT CONCAT('company_name count mismatches please check', companyname);
            END IF
        END
    END LOOP transfer_records;

    CLOSE records_cursor;
END$$

DELIMITER ;

The above query is meant to create a procedure that performs the steps below:

  1. Fetch companyname and duplcount for each company name with the first cursor.
  2. Fetch the contact id of each contact for that company name and loop over them with an UPDATE statement.
  3. Update the reorder_sequence column with values like in the example given below.

Expected result

Eg: contact table

+--------------------------------------------------------+
|                         contact                        |
+--------------------------------------------------------+
| order_placed | contact_id | company_name | seq_records |
+--------------+------------+--------------+-------------+
| null         |       1002 |         Asus | 1           |
+--------------+------------+--------------+-------------+
| null         |       1003 |         Asus | 2           |
+--------------+------------+--------------+-------------+
| null         |       1005 |         Asus | 3           |
+--------------+------------+--------------+-------------+
| null         |       1006 |       Lenova | 1           |
+--------------+------------+--------------+-------------+

As in the example above, I want the seq_records column updated with values according to the company_name column, provided that the order_placed column of both tables is null.

Error

A syntax error occurred with code 1064 near the second SELECT statement.