If your class gives you a choice of skills to be trained in, can you choose a skill you’re already trained in to gain a free skill choice?

I know that if a class would make you trained in a skill you’re already trained in, you can select another skill to become trained in.

My question is about classes that let you pick between certain skills to be trained in; for example, the Fighter lets you pick between Acrobatics and Athletics.

If I’m already trained in Athletics because of my background, can I choose Athletics as my Fighter skill, so that I gain another free skill choice, thus having 4+Int skill choices, instead of the usual 3+Int for a Fighter? Or would I be forced to choose Acrobatics?

In Pathfinder 2e, are ability modifier benefits to things like HP and trained skills retroactive?

In Pathfinder 2e, a character’s Constitution modifier affects their total HP, and their Intelligence modifier grants training in additional skills and languages.

Are these benefits retroactive?

Examples:

  • If a character’s Constitution modifier increases from +2 to +3 as the result of an ability boost, does HP increase by 1 per character level? That is, would a level 5 character going from a +2 to a +3 Con modifier gain 5 hit points from the boost (see the sketch after these examples)?
  • If a character’s Intelligence modifier increases from +1 to +2 as the result of an ability boost, does that character gain another trained skill and learned language?
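To make the first example concrete, here is a minimal sketch of the arithmetic under the retroactive reading (the HP formula is ancestry HP + level × (class HP + Con modifier); the specific ancestry and class numbers below are just illustrative placeholders):

    # Pathfinder 2e total HP: ancestry HP + level * (class HP + Con modifier).
    # The ancestry (8) and class (10) values are illustrative placeholders.
    def total_hp(level, con_mod, ancestry_hp=8, class_hp=10):
        return ancestry_hp + level * (class_hp + con_mod)

    before = total_hp(level=5, con_mod=2)  # 8 + 5 * (10 + 2) = 68
    after = total_hp(level=5, con_mod=3)   # 8 + 5 * (10 + 3) = 73
    print(after - before)                  # 5, i.e. 1 extra HP per character level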

How does the BERT model (in the TensorFlow or PaddlePaddle frameworks) relate to the nodes of the underlying neural net that’s being trained?

The BERT model in frameworks like TensorFlow or PaddlePaddle is presented as a graph of computation nodes (subtract, accumulate, add, multiply, etc.) arranged into 12 layers.

But this graph doesn’t look anything like the neural networks typically shown in textbooks (e.g. https://en.wikipedia.org/wiki/Artificial_neural_network#/media/File:Colored_neural_network.svg), where each edge has a weight that’s being trained and there are distinct input and output layers.
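To illustrate the mismatch (a minimal sketch of my own, not taken from any BERT code): a single textbook layer of neurons corresponds to only a handful of op nodes in a framework graph, because the graph stores one node per tensor operation rather than one node per neuron, and all the edge weights live inside a single matrix:

    import numpy as np

    # One textbook fully connected layer, y = activation(W x + b).
    # In a framework graph this shows up as roughly three op nodes:
    # a matmul node (all the edge weights at once, as the matrix W),
    # an add node (the bias vector b), and an activation node.
    x = np.random.randn(768)        # 768 = BERT-base hidden size
    W = np.random.randn(768, 768)   # every trainable "edge weight" is one entry of W
    b = np.random.randn(768)        # trainable biases
    y = np.maximum(W @ x + b, 0.0)  # matmul -> add -> activation (ReLU here; BERT uses GELU)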

When I print out the BERT graph, I can’t figure out how a node in that graph relates to a node in the neural network being trained.

I have been compiling the BERT framework models into a form that can run on a PC/CPU, but I’m still missing this basic aspect of how BERT relates to a neural net, because I can’t see which network topology is being trained (I’d expect the topology, i.e. the connections between the layers and nodes of the net, to dictate how training proceeds).

Could someone explain what underlying neural net is being trained by BERT? How do nodes in the BERT graph relate to the nodes of that neural net and to the weights on its edges?

Why do some classes start with fewer trained skills?

Looking at Pre-Essentials 4th Edition classes, it seems like they map out like this:

Three trained skills:

  • Defenders: Battlemind, Fighter
  • Striker: Barbarian

Four trained skills (one may be fixed, e.g. Religion required for a divine class):

  • Defenders: Paladin, Swordmage, Warden
  • Strikers: Avenger, Monk, Sorcerer, Warlock
  • Leaders: Ardent, Cleric, Runepriest, Shaman, Warlord
  • Controllers: Druid, Invoker, Psion, Seeker, Wizard

Five trained skills (one is fixed):

  • Striker: Ranger
  • Leaders: Artificer, Bard

Six trained skills (two are fixed):

  • Striker: Rogue

I’m a bit mystified by the Battlemind, Fighter, and Barbarian getting only three trained skills. Is there a particular reason for this? Something about their other class features or the relative value of certain class skills over others?

(I’m more interested in how the skills play into class balance than “designer reasons.”)

Minimize a function containing a trained deep learning model

I want to minimize the following function:

    minimize over (m, ∆):   l(yt, f(A(x, m, ∆))) + λ*||m||₁

A(x, m, ∆) is a function that adds the perturbation ∆ to the input image x, which belongs to a given dataset.

Here, l(yt, f(A(x, m, ∆))) is the loss function of a trained deep neural network, yt is the ground truth (one-hot encoded), and f(A(x, m, ∆)) returns the logits.

λ*||m||₁ is λ times the L1 norm of the matrix m, where λ is a hyperparameter.

How should I implement a script in Python to find the optimal values of m and ∆?
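Below is a minimal sketch of one possible approach in PyTorch. Everything in it is an illustrative assumption: a dummy classifier stands in for the trained f, elementwise x + m * ∆ stands in for A (since its exact form isn’t given), and lam, shapes, learning rate, and step count are placeholders:

    import torch

    # Hypothetical stand-ins: a trained classifier f, one image x, one-hot yt.
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
    model.eval()
    for p in model.parameters():
        p.requires_grad_(False)                      # f is trained and stays frozen

    x = torch.randn(1, 3, 32, 32)                    # an input image from the dataset
    yt = torch.nn.functional.one_hot(torch.tensor([3]), 10).float()  # one-hot ground truth

    m = torch.zeros_like(x, requires_grad=True)      # the L1-penalized matrix m
    delta = torch.zeros_like(x, requires_grad=True)  # the perturbation delta
    lam = 1e-3                                       # hyperparameter lambda
    opt = torch.optim.Adam([m, delta], lr=1e-2)
    loss_fn = torch.nn.CrossEntropyLoss()            # l; expects class indices, not one-hot

    for step in range(500):
        opt.zero_grad()
        x_adv = x + m * delta                        # stand-in for A(x, m, delta)
        logits = model(x_adv)                        # f(A(x, m, delta))
        loss = loss_fn(logits, yt.argmax(dim=-1)) + lam * m.abs().sum()
        loss.backward()                              # gradients flow to m and delta only
        opt.step()

If hard sparsity on m is required, the L1 term can be paired with proximal/thresholding steps, but plain (sub)gradient descent as above is the simplest starting point.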

Is there a way to import a trained neural network into a 32-bit version of Python 2.7?

I have a neural network control mechanism that I implemented for a model written in Python 3.6 using TensorFlow. Now I would like to use this control mechanism in the real world, namely with a 32-bit OPC server.

One way to do this would be through a 32-bit version of Python 2.7. Unfortunately, I can only find 64-bit versions of Keras and TensorFlow. I would be really grateful if anybody could tell me how to use a trained network under a 32-bit Python 2.7, or any other way to connect 64-bit Python 3.6 to a 32-bit OPC server.
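One workaround, sketched below under heavy assumptions (a small feed-forward Keras model with two Dense layers; the file names controller.h5 and weights.npz are my own placeholders), is to dump the trained weights to a plain NumPy file on the 64-bit Python 3.6 side and reimplement the forward pass in pure NumPy on the 32-bit Python 2.7 side, since NumPy, unlike TensorFlow, ships 32-bit Python 2.7 builds:

    # --- Python 3.6, 64-bit: export the trained weights to a framework-free file ---
    import numpy as np
    from tensorflow import keras

    model = keras.models.load_model("controller.h5")  # hypothetical path to the trained net
    np.savez("weights.npz", *model.get_weights())     # stored as arr_0, arr_1, ...

    # --- Python 2.7, 32-bit: NumPy-only forward pass, no Keras/TensorFlow needed ---
    import numpy as np

    data = np.load("weights.npz")
    W1, b1, W2, b2 = (data["arr_%d" % i] for i in range(4))  # assumes two Dense layers

    def predict(x):
        h = np.maximum(x.dot(W1) + b1, 0.0)  # first Dense layer + ReLU
        return h.dot(W2) + b2                # output layer (logits)

The OPC server side then only needs the 32-bit Python 2.7 process with NumPy installed; no deep learning framework has to run there.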

How to test an input image with a pre-trained CNN model

I have saved my CNN model in a .pth file using PyTorch. I want to run predictions on an input image from that .pth file, but I am getting this error:

    File "C:/Users/MS/PycharmProjects/GUI_last/test.py", line 148, in
      print('Accuracy of the network on the test images: %d %%' % (100 * (correct) / total))
    ZeroDivisionError: division by zero

The process finished with exit code 1. My code is here:

    import torch
    import torch.nn as nn

    # Assumes testloader, testset, and classes are defined elsewhere in the file.

    def testing(self):
        model = torch.load("last_brain1.pth")
        print(model)
        criterion = nn.CrossEntropyLoss()
        model.eval()

        running_loss = 0.0
        correct = 0
        total = 0
        itr = 0

        with torch.no_grad():                 # no gradients needed at test time
            for images, labels in testloader:
                outputs = model(images)       # was model[images]: call the model, don't index it
                loss = criterion(outputs, labels)
                running_loss += loss.item()   # accumulate in a separate variable
                itr += 1                      # instead of overwriting `loss`
                _, predicted = torch.max(outputs, 1)
                total += labels.size(0)
                correct += (predicted == labels).sum().item()

        print('test loss: %f' % (running_loss / itr))
        print('test accuracy: %f %%' % (100 * correct / len(testset)))
        # The ZeroDivisionError happened because correct and total were reset to
        # zero before this print; print while total still holds the sample count.
        print('Accuracy of the network on the test images: %d %%' % (100 * correct / total))

        # Per-class accuracy over 3 classes.
        class_correct = [0] * 3
        class_total = [0] * 3
        with torch.no_grad():
            for images, labels in testloader:
                outputs = model(images)
                _, predicted = torch.max(outputs, 1)
                c = (predicted == labels).squeeze()
                for i in range(labels.size(0)):
                    label = labels[i]
                    class_correct[label] += c[i].item()
                    class_total[label] += 1
        for i in range(3):
            print('Accuracy of %5s : %2f %%' %
                  (classes[i], 100 * class_correct[i] / class_total[i]))

Can an undead creature be nonmagically controlled or trained?

Is it possible for a typical fighter, for example (or any other non-magically-inclined individual), to control a zombie or skeleton he found and train it to do what he wants, say with positive and negative reinforcement as with a dog, or in any other mundane way? And could that fighter train a smarter undead creature, like a ghoul, the same way?