Change the DC in an AD LDS application directory partition name (the Naming Context)

I have a server on which I used the Active Directory Lightweight Directory Services Setup Wizard to create a new application directory partition:

[screenshot: the AD LDS Setup Wizard page showing the new application directory partition's name]

I need to rename the DC part of the Partition name from the screenshot above.

The Partition name is the Naming Context for all my AD LDS objects – when I connect to it using ADSI Edit, for example, the connection string is:


So every object in that Naming Context has CN=SomeValue,DC=Example as part of its distinguishedName.

Is there a way that I can simply update everything in this application directory partition to be as if I had originally put CN=SomeValue,DC=TheCorrectValue as the Partition name / Naming Context?

Note: The AD LDS instance / directory partition is not complex at all… the only thing I’ve done since creating it is add user objects via LDAP with C#. The reason I don’t just start over with a new application directory partition is that the user objects are currently being used to authenticate users to an application.
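For illustration only: if the fix ends up being an export and re-import into a new partition, every object's distinguishedName needs its naming-context suffix rewritten. A minimal Python sketch of that string rewrite (the helper name is invented; the DC= values come from the question):

```python
# Hypothetical helper illustrating the DN rewrite an export/re-import
# would have to apply to every object. This does not talk to AD LDS.

def rewrite_dn_suffix(dn: str, old_suffix: str, new_suffix: str) -> str:
    """Replace the trailing naming-context suffix of a distinguishedName."""
    if not dn.lower().endswith(old_suffix.lower()):
        raise ValueError(f"{dn!r} is not under {old_suffix!r}")
    return dn[: len(dn) - len(old_suffix)] + new_suffix

print(rewrite_dn_suffix(
    "CN=SomeValue,DC=Example",  # DN shape from the question
    "DC=Example",               # current naming context
    "DC=TheCorrectValue",       # desired naming context
))
# → CN=SomeValue,DC=TheCorrectValue
```

This only shows the string transformation; actually moving the objects (and keeping the user accounts usable for authentication) is the hard part the question is about.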

Name of the antipattern where solving a problem is recursively delegated 15 layers down?

What’s the antipattern where classes never do their jobs and instead delegate to the next layer, which delegates to the next layer, and so on for 15 layers, such that a refactor can squash thousands of lines of code spread across those layers into one tiny class with a couple hundred?

What you’d want:

– baker.Bake(potato);

What you get:

– somethingComplicated.Bake(potato);
– somethingComplicated.Bake(potato) => otherThing.Bake(potato)
– otherThing.Bake(potato) => anotherThing.Bake(potato)
– … 13 layers later
– concreteBaker.Bake(potato)

Context: I’m cleaning up initialization logic that’s 15 layers of convoluted nesting (each layer consisting of god classes) to set up a single object. There’s heavy sequential coupling between individual layers, and initialization sometimes continues in completely unobvious event callbacks.

Solution: Move the code into one simplified bootstrapping class (most layers just delegate to another layer after doing minor work). Potentially the bootstrapper plays sergeant and delegates to the different layers one at a time, keeping stack depth small and easy to reason about.
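For what it's worth, this smell is usually discussed under names like Fowler's "Middle Man". A toy Python sketch (all names invented) of why the layers can be squashed: the 15-deep chain and the flat class are observably identical to callers.

```python
# Toy model of the delegation chain described above; every name is invented.

class ConcreteBaker:
    """The one class that actually does the work."""
    def bake(self, potato):
        return f"baked {potato}"

class Forwarder:
    """A layer that does no work of its own and just delegates inward."""
    def __init__(self, inner):
        self._inner = inner

    def bake(self, potato):
        return self._inner.bake(potato)

def make_delegating_chain(depth):
    """Wrap ConcreteBaker in `depth` do-nothing layers."""
    baker = ConcreteBaker()
    for _ in range(depth):
        baker = Forwarder(baker)
    return baker

# 15 layers deep or zero layers deep, callers can't tell the difference,
# which is exactly why the refactor can collapse the stack.
assert make_delegating_chain(15).bake("potato") == ConcreteBaker().bake("potato")
```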

How to name a range so that it doesn’t automatically change when rows are added in between

Is there a way to name a range so that it doesn’t automatically change when rows are added in between?

Ex: Range1 = 'Sheet1'!K5:K36. Add a row between rows 6 and 7, and this automatically changes to Range1 = 'Sheet1'!K5:K37.

When I go into the named-range sidebar, I’ve tried to change the formula to say: Range1 = 'Sheet1'!$K$5:$K$36
but the $ signs go away immediately upon hitting OK.

How to best name enterprise tools or portals?

If you have worked within a large organisation, you have most probably operated in highly siloed environments with a lot of legacy systems and third-party software. I have worked in a number of big organisations as an information architect, and I am struck by one practice in particular: how departments create names for their internal products, tools, or portals, of which there are of course plenty!

There is, of course, the excessive use of acronyms to describe software, but there are also product names that fail to describe what a given piece of software does. As an example: a simple task such as getting access / permission to use a portal or tool becomes extremely difficult without knowing the exact name of the product, which is a big findability issue. So my questions are:

  1. Is there any resource or guidance for naming enterprise tools or portals?

  2. What advice / guidance best practices would you suggest?

NameError: name 'train_gen' is not defined

I’m new to Python and TensorFlow. I’m now testing the Improved WGAN code from the original repository. After adjusting the code to Python 3.6, it still gives “NameError: name 'train_gen' is not defined” when I run it, although there was no warning from pylint.

Can anyone help me with it?

The version of Python I’m using is 3.6. There were many syntax differences from 2.7, and I’ve already changed a lot to make it work. I am running TensorFlow in a virtual environment. I still couldn’t figure this one out.

```python
import os, sys
sys.path.append(os.getcwd())

import time

import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
import numpy as np
import sklearn.datasets
import tensorflow as tf

import tflib as lib
import tflib.ops.linear
import tflib.ops.conv2d
import tflib.ops.batchnorm
import tflib.ops.deconv2d
import tflib.save_images
import tflib.mnist
import tflib.plot

MODE = 'wgan-gp' # dcgan, wgan, or wgan-gp
DIM = 64 # Model dimensionality
BATCH_SIZE = 50 # Batch size
CRITIC_ITERS = 5 # For WGAN and WGAN-GP, number of critic iters per gen iter
LAMBDA = 10 # Gradient penalty lambda hyperparameter
ITERS = 200000 # How many generator iterations to train for
OUTPUT_DIM = 784 # Number of pixels in MNIST (28*28)

lib.print_model_settings(locals().copy())

def LeakyReLU(x, alpha=0.2):
    return tf.maximum(alpha*x, x)

def ReLULayer(name, n_in, n_out, inputs):
    output = lib.ops.linear.Linear(
        name+'.Linear',
        n_in,
        n_out,
        inputs,
        initialization='he'
    )
    return tf.nn.relu(output)

def LeakyReLULayer(name, n_in, n_out, inputs):
    output = lib.ops.linear.Linear(
        name+'.Linear',
        n_in,
        n_out,
        inputs,
        initialization='he'
    )
    return LeakyReLU(output)

def Generator(n_samples, noise=None):
    if noise is None:
        noise = tf.random_normal([n_samples, 128])

    output = lib.ops.linear.Linear('Generator.Input', 128, 4*4*4*DIM, noise)
    if MODE == 'wgan':
        output = lib.ops.batchnorm.Batchnorm('Generator.BN1', [0], output)
    output = tf.nn.relu(output)
    output = tf.reshape(output, [-1, 4*DIM, 4, 4])

    output = lib.ops.deconv2d.Deconv2D('Generator.2', 4*DIM, 2*DIM, 5, output)
    if MODE == 'wgan':
        output = lib.ops.batchnorm.Batchnorm('Generator.BN2', [0,2,3], output)
    output = tf.nn.relu(output)

    output = output[:,:,:7,:7]

    output = lib.ops.deconv2d.Deconv2D('Generator.3', 2*DIM, DIM, 5, output)
    if MODE == 'wgan':
        output = lib.ops.batchnorm.Batchnorm('Generator.BN3', [0,2,3], output)
    output = tf.nn.relu(output)

    output = lib.ops.deconv2d.Deconv2D('Generator.5', DIM, 1, 5, output)
    output = tf.nn.sigmoid(output)

    return tf.reshape(output, [-1, OUTPUT_DIM])

def Discriminator(inputs):
    output = tf.reshape(inputs, [-1, 1, 28, 28])

    output = lib.ops.conv2d.Conv2D('Discriminator.1', 1, DIM, 5, output, stride=2)
    output = LeakyReLU(output)

    output = lib.ops.conv2d.Conv2D('Discriminator.2', DIM, 2*DIM, 5, output, stride=2)
    if MODE == 'wgan':
        output = lib.ops.batchnorm.Batchnorm('Discriminator.BN2', [0,2,3], output)
    output = LeakyReLU(output)

    output = lib.ops.conv2d.Conv2D('Discriminator.3', 2*DIM, 4*DIM, 5, output, stride=2)
    if MODE == 'wgan':
        output = lib.ops.batchnorm.Batchnorm('Discriminator.BN3', [0,2,3], output)
    output = LeakyReLU(output)

    output = tf.reshape(output, [-1, 4*4*4*DIM])
    output = lib.ops.linear.Linear('Discriminator.Output', 4*4*4*DIM, 1, output)

    return tf.reshape(output, [-1])

real_data = tf.placeholder(tf.float32, shape=[BATCH_SIZE, OUTPUT_DIM])
fake_data = Generator(BATCH_SIZE)

disc_real = Discriminator(real_data)
disc_fake = Discriminator(fake_data)

gen_params = lib.params_with_name('Generator')
disc_params = lib.params_with_name('Discriminator')

if MODE == 'wgan':
    gen_cost = -tf.reduce_mean(disc_fake)
    disc_cost = tf.reduce_mean(disc_fake) - tf.reduce_mean(disc_real)

    gen_train_op = tf.train.RMSPropOptimizer(
        learning_rate=5e-5
    ).minimize(gen_cost, var_list=gen_params)
    disc_train_op = tf.train.RMSPropOptimizer(
        learning_rate=5e-5
    ).minimize(disc_cost, var_list=disc_params)

    clip_ops = []
    for var in lib.params_with_name('Discriminator'):
        clip_bounds = [-.01, .01]
        clip_ops.append(
            tf.assign(
                var,
                tf.clip_by_value(var, clip_bounds[0], clip_bounds[1])
            )
        )
    clip_disc_weights = tf.group(*clip_ops)

elif MODE == 'wgan-gp':
    gen_cost = -tf.reduce_mean(disc_fake)
    disc_cost = tf.reduce_mean(disc_fake) - tf.reduce_mean(disc_real)

    alpha = tf.random_uniform(
        shape=[BATCH_SIZE, 1],
        minval=0.,
        maxval=1.
    )
    differences = fake_data - real_data
    interpolates = real_data + (alpha*differences)
    gradients = tf.gradients(Discriminator(interpolates), [interpolates])[0]
    slopes = tf.sqrt(tf.reduce_sum(tf.square(gradients), reduction_indices=[1]))
    gradient_penalty = tf.reduce_mean((slopes-1.)**2)
    disc_cost += LAMBDA*gradient_penalty

    gen_train_op = tf.train.AdamOptimizer(
        learning_rate=1e-4,
        beta1=0.5,
        beta2=0.9
    ).minimize(gen_cost, var_list=gen_params)
    disc_train_op = tf.train.AdamOptimizer(
        learning_rate=1e-4,
        beta1=0.5,
        beta2=0.9
    ).minimize(disc_cost, var_list=disc_params)

    clip_disc_weights = None

elif MODE == 'dcgan':
    gen_cost = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
        disc_fake,
        tf.ones_like(disc_fake)
    ))

    disc_cost = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
        disc_fake,
        tf.zeros_like(disc_fake)
    ))
    disc_cost += tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
        disc_real,
        tf.ones_like(disc_real)
    ))
    disc_cost /= 2.

    gen_train_op = tf.train.AdamOptimizer(
        learning_rate=2e-4,
        beta1=0.5
    ).minimize(gen_cost, var_list=gen_params)
    disc_train_op = tf.train.AdamOptimizer(
        learning_rate=2e-4,
        beta1=0.5
    ).minimize(disc_cost, var_list=disc_params)

    clip_disc_weights = None

# For saving samples
fixed_noise = tf.constant(np.random.normal(size=(128, 128)).astype('float32'))
fixed_noise_samples = Generator(128, noise=fixed_noise)
def generate_image(frame, true_dist):
    samples = session.run(fixed_noise_samples)
    lib.save_images.save_images(
        samples.reshape((128, 28, 28)),
        'samples_{}.png'.format(frame)
    )

# Dataset iterator
train_gen, dev_gen, test_gen = lib.mnist.load(BATCH_SIZE, BATCH_SIZE)
def inf_train_gen():
    while True:
        for images, targets in train_gen():
            yield images

# Train loop
with tf.Session() as session:

    session.run(tf.initialize_all_variables())

    gen = inf_train_gen()

    for iteration in range(ITERS):
        start_time = time.time()

        if iteration > 0:
            _ = session.run(gen_train_op)

        if MODE == 'dcgan':
            disc_iters = 1
        else:
            disc_iters = CRITIC_ITERS
        for i in range(disc_iters):
            _data = gen.__next__()
            _disc_cost, _ = session.run(
                [disc_cost, disc_train_op],
                feed_dict={real_data: _data}
            )
            if clip_disc_weights is not None:
                _ = session.run(clip_disc_weights)

        lib.plot.plot('train disc cost', _disc_cost)
        lib.plot.plot('time', time.time() - start_time)

        # Calculate dev loss and generate samples every 100 iters
        if iteration % 100 == 99:
            dev_disc_costs = []
            for images, _ in dev_gen():
                _dev_disc_cost = session.run(
                    disc_cost,
                    feed_dict={real_data: images}
                )
                dev_disc_costs.append(_dev_disc_cost)
            lib.plot.plot('dev disc cost', np.mean(dev_disc_costs))

            generate_image(iteration, _data)

        # Write logs every 100 iters
        if (iteration < 5) or (iteration % 100 == 99):
            lib.plot.flush()

        lib.plot.tick()
```

This is the section containing the name from the error:

```python
# Dataset iterator
train_gen, dev_gen, test_gen = lib.mnist.load(BATCH_SIZE, BATCH_SIZE)
def inf_train_gen():
    while True:
        for images, targets in train_gen():
            yield images
```

And here is the error.

```
Traceback (most recent call last):
  File "<stdin>", line 13, in <module>
  File "<stdin>", line 3, in inf_train_gen
NameError: name 'train_gen' is not defined
```
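For what it's worth, the pattern itself is fine when the whole file runs as one module. In this minimal, self-contained version (lib.mnist.load is replaced with a stub; the names mirror the post), inf_train_gen sees the module-level train_gen without trouble. Together with the `<stdin>` frames in the traceback, that suggests the assignment to train_gen never actually executed in the session that raised the error (e.g. the code was pasted or run piecemeal):

```python
# Minimal reproduction of the failing section with lib.mnist.load stubbed out.

BATCH_SIZE = 50

def load(batch_size, dev_batch_size):
    """Stub for lib.mnist.load: returns three generator factories."""
    def gen():
        yield list(range(batch_size)), None  # (images, targets)
    return gen, gen, gen

# Dataset iterator
train_gen, dev_gen, test_gen = load(BATCH_SIZE, BATCH_SIZE)

def inf_train_gen():
    while True:
        for images, targets in train_gen():
            yield images

# Run as one module, inf_train_gen resolves the module-level train_gen fine:
gen = inf_train_gen()
images = next(gen)
print(len(images))
# → 50
```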

What could be the name of this rare type of HP Pavilion power connector?

It is an HP Pavilion G6 laptop, and I am trying to identify its uncommon power plug. Looking around on the net, I only see the more standard center-pin power plugs, not this one. Most HP Pavilion G6 laptops probably have the common center-pin power socket, but this one seems to be an exception.

In the image, a 5-pin power connector is visible in the middle: around a central pin, there are 4 other pins in a symmetrical, rectangular layout.

What is its name?

[photo of the laptop's 5-pin power connector]

VS Code – how to set the name for new files

When using the built-in refactor tools (Ctrl+.), I get the option ‘Move to new file’.

It works pretty well, but I can’t set the name of the new file. This is a problem because the default is CamelCase.js, and we are working with kebab-case.js.

Is there any way to change the format, or to set the file name?