Improving myself and getting rid of my bad player habits

Title really says it all. For a period of time (one or two years? Hard to tell) I became oddly ill. Incredibly sleepy, irritable, forgetful and it was all I could do to participate in the game. The GM thought I was becoming apathetic to his world, the other players just assumed my character was being distant, and I struggled to roleplay and deal with some mechanical problems with the character and the system (lots of homebrew) as a whole. However, with medication, I slowly started coming around. This came at a cost, however.

My irritability became explosive. I said things that I regret to this day, and ruined some friendships as a result. It's been… almost 9 months since the last straw broke and the long-running game had to be put on hiatus. We had thought to try to talk things out, and to see if we couldn't cement some positive habits in place of the negative ones through a series of modules, each run by a different player and then one run by a GM. Unfortunately, my schedule went haywire and this couldn't happen. I decided to back out of the group until I could fix myself, and so they could continue playing in other games.

Lately, I’ve noticed myself falling into bad habits. Lots of negativity toward certain mechanics or just some darker thoughts. Nothing to talk with a psychiatrist about, but more things that I’m trying to fix about myself and get rid of.

I'm wondering if anyone knows of a good way to help change the way a player plays without them being in a game? My schedule doesn't allow me to play locally, and I don't want to subject a GM to me if I relapse.

EDIT: Sorry if the tags weren’t correct. This is a difficult question for me to figure out.

What are the sources and scope for improving website ranking and traffic?

I have several websites of my own, but they have all become stagnant.

Can you help me improve their ranking and generate more traffic to the website? The website is insurance-related.

Website: –

Targeted region: – USA

Website about: – Car/Auto Insurance.

I am targeting keywords that have very low competition and high search volume.

Currently I am ranking number one for some of the high-search-volume keywords for…


Do 10 Pbn Posts Dofollow Backlinks To Website Improving Rank for $30

We provide high-authority, manual, high TF/CF/DA/PA homepage PBN backlinks on the WordPress platform to high-quality standards. With recent changes in the Google algorithm, you need QUALITY BACKLINKS, meaning contextual backlinks from expired websites with high Domain Authority (DA) and Page Authority (PA) (called gems in the SEO industry).

Key features of our service:
1. WordPress-hosted domains
2. Homepage backlinks on DA/PA sites
3. TF 10 to 40+, DA 10 to 40+, PA 15 to 40+ (average TF/CF/DA/PA of 25+)
4. 100% manual posts
5. All contextual, dofollow links
6. Links from aged, powerful domains
7. 24/7 customer support

We have a large blog network in which every blog has page authority 27+. This service is exclusively for quality lovers who want natural links with relevant content on high-authority sites. Such high-metric links will definitely boost your SERP.

by: divyaMehta9512
Created: —
Category: PBNs
Viewed: 113

Improving speed of Binomial and Multinomial Random Draws

A number of users have discussed the speed of random number generation in Mathematica.

The binomial and multinomial random number generators in Mathematica are fast when multiple draws are needed for the same distribution parameters. For example, generating 500,000 draws from the binomial distribution is very quick:

In[30]:= AbsoluteTiming[
  RandomVariate[BinomialDistribution[100, 0.6], 500000];]

Out[30]= {0.017365, Null}

However, they are slow compared to their counterparts in R and Julia when the parameters change across draws, as may be required when performing certain Monte Carlo simulations.

For example, suppose we have a vector nvec that contains the number of trials for each draw and a vector pvec that contains the corresponding probabilities of success:

nvec = RandomInteger[{5, 125}, 500000];
pvec = RandomReal[{0, 1}, 500000];

Then we have

In[28]:= AbsoluteTiming[
  Mean[Table[
     RandomVariate[BinomialDistribution[nvec[[i]], pvec[[i]]]],
     {i, 1, Length@nvec}]] // N
  ]

Out[28]= {36.2144, 32.5283}

This performance hit most probably stems from how these generators are implemented internally in Mathematica.
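This is not a Mathematica-side fix, but for comparison: NumPy's generator (like the R and Julia generators mentioned above) accepts arrays of parameters directly, so the varying-parameter case is a single vectorized call. A small sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Per-draw parameters, analogous to nvec and pvec in the Mathematica code.
nvec = rng.integers(5, 126, size=500_000)   # trial counts in [5, 125]
pvec = rng.random(500_000)                  # success probabilities in [0, 1)

# One vectorized call draws from a different Binomial(n, p) per element.
draws = rng.binomial(nvec, pvec)

print(draws.shape)                  # (500000,)
print(bool((draws <= nvec).all()))  # True
```

This runs in a fraction of a second on typical hardware, which is the behavior the varying-parameter Mathematica case lacks.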

Are there alternate methods that are fast for the case when the distribution parameters change across draws?

Genetic algorithm not improving over generations

I have written a very simple genetic algorithm but I don’t know if I have got the logic right.

I take the members with the highest fitness, kill off the worst 50%, and from there cross over members picked at random from the remaining pool.

For crossover, I simply take a midpoint which will act as a pivot in the list of connections / weights.

I then add connections from one parent up to the midpoint, then the remaining from the other parent.

For mutation, each new member is mutated immediately after creation: a random weight is chosen and a random value between -1 and 1 is added to it.
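The crossover and mutation scheme described above can be sketched in Python for reference (the names here are illustrative, not from the actual Unity project):

```python
import random

def crossover_and_mutate(parent1, parent2):
    """Single-point crossover of two equal-length weight lists,
    followed by mutation of one randomly chosen weight."""
    # Pivot index: the child takes parent1's weights before the midpoint
    # and parent2's weights from the midpoint onward.
    midpoint = random.randrange(len(parent1))
    child = parent1[:midpoint] + parent2[midpoint:]

    # Mutate exactly one weight by adding a random value in [-1, 1].
    i = random.randrange(len(child))
    child[i] += random.uniform(-1.0, 1.0)
    return child

parent_a = [0.1, 0.2, 0.3, 0.4]
parent_b = [0.9, 0.8, 0.7, 0.6]
child = crossover_and_mutate(parent_a, parent_b)
print(len(child))  # 4
```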

There is some improvement but as time goes on, the members just start getting worse.

Is there anything I have done wrong?

Here’s selecting random members to cross over:

while (NewMembers.Count != populationSize)
{
    int n = Random.Range(0, populationSize / 2);
    int m = Random.Range(0, populationSize / 2);

    if (n != m)
    {
        NewMembers.Add(crossover(Members[n], Members[m]));
    }
}


And here's the crossover and mutation:

int midpoint = Random.Range(0, child.GetComponent<Member>().connections.Count);

for (int i = 0; i < midpoint; i++)
{
    child.GetComponent<Member>().connections[i].setWeight(m1.GetComponent<Member>().connections[i].getWeight());
}

for (int i = midpoint; i < child.GetComponent<Member>().connections.Count; i++)
{
    child.GetComponent<Member>().connections[i].setWeight(m2.GetComponent<Member>().connections[i].getWeight());
}

int randomConnMutation = Random.Range(0, child.GetComponent<Member>().connections.Count);

child.GetComponent<Member>().connections[randomConnMutation].setWeight(child.GetComponent<Member>().connections[randomConnMutation].getWeight() + Random.Range(-1f, 1f));

return child;

I am using Unity, but that shouldn't affect the logic of the genetic algorithm, so it's probably irrelevant; I figured I'd mention it anyway.

Help would be much appreciated.

P.S. before anyone asks “what’s my learning rate” or whatever, I don’t know what that is and it is not used.

Improving UX for multi-selectable cards

I’m designing a screen where the user should be able to:
1. Select between multiple options (displayed in cards)
2. Search for other cards and be able to select those as well.
Here’s a picture for better explanation:

Option to search or select the various options

User selects option 2 from recent options

User now searches for another option and selects it

His selections are only displayed when he clears his search

I am confused about the user experience part of such an interface.
1. Is this an intuitive UI?
2. There isn’t much real estate for me to play with and displaying the same cards in both the search results and separately as a selected option does not seem possible. Are there examples of other sites which do this?
3. What should the ideal multi-card search be like?

Shape X2 Keto: another very important recommendation, focus on improving your nutrition gradually

Shape X2 Keto nutrition and weight management strategies are combined with a sound exercise program. As a quick recommendation which I sincerely hope you follow, steer very clear of quick weight loss diets. They are a recipe for disaster. To do this, it's important that you think of your weight loss as a process that will continue for as long as you are alive. Too many people want quick results but fail to consider the long term. That will surely lead to frustration, discouragement, and eventually, failure.

PHP Laravel – Improving and refactoring code to Reduce Queries

Improve Request to Reduce Queries

I have a web application where users can upload Documents or Emails to what I call a Stream. The users can then define document fields and email fields on the stream, which each document/email will inherit. The users can furthermore apply parsing rules to these fields, against which each document/email will be parsed.

Now let's take the example that a user uploads a new document. (I have hardcoded the IDs for simplicity.)

$stream = Stream::find(1);
$document = Document::find(2);

$parsing = new ApplyParsingRules;
$document->storeContent($parsing->parse($stream, $document));

Below is the function that parses the document according to the parsing rules:

public function parse(Stream $stream, DataTypeInterface $data) : array
{
    // Get the rules.
    $rules = $data->rules();

    $result = [];
    foreach ($rules as $rule) {
        $result[] = [
            'field_rule_id' => $rule->id,
            'content' => 'something something',
            'typeable_id' => $data->id,
        ];
    }

    return $result;
}

So the above basically just returns an array of the parsed text.

Now, as you can probably see, I use an interface, DataTypeInterface. This is because the parse function can accept both Documents and Emails.

To get the rules, I use this code:

// Get the rules.
$rules = $data->rules();

The method looks like this:

class Document extends Model implements DataTypeInterface
{
    public function stream()
    {
        return $this->belongsTo(Stream::class);
    }

    public function rules() : object
    {
        return FieldRule::where([
            ['stream_id', '=', $this->stream->id],
            ['fieldable_type', '=', 'App\DocumentField'],
        ])->get();
    }
}

This will query the database for all the rules that are associated with document fields, where those fields are associated with the specific Stream.

Last, in my first request, I had this:

$document->storeContent($parsing->parse($stream, $document));

The storeContent method looks like this:

class Document extends Model implements DataTypeInterface
{
    // A document will have many field rule results.
    public function results()
    {
        return $this->morphMany(FieldRuleResult::class, 'typeable');
    }

    // Persist the parsed content to the database.
    public function storeContent(array $parsed) : object
    {
        foreach ($parsed as $parse) {
            $this->results()->updateOrCreate(
                [
                    'field_rule_id' => $parse['field_rule_id'],
                    'typeable_id' => $parse['typeable_id'],
                ],
                $parse
            );
        }

        return $this;
    }
}

As you can probably imagine, every time a document gets parsed, it is parsed against some specific rules. These rules all generate results, and I save each result in the database using the storeContent method.

However, this will also generate a query for each result.

One thing to note: I am using the updateOrCreate method to store the field results because I only want to insert rows for new results; when only the content of an existing result has changed, I want to update the existing row in the database.
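One common way to cut down the per-row queries is a single bulk upsert for the whole batch; recent Laravel versions expose this as Eloquent's upsert method. A conceptual sketch in Python with SQLite (the schema here is assumed, mirroring the column names above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE field_rule_results (
        field_rule_id INTEGER,
        typeable_id   INTEGER,
        content       TEXT,
        UNIQUE (field_rule_id, typeable_id)
    )
""")

# Parsed results, shaped like the array returned by parse() above.
parsed = [
    {"field_rule_id": 1, "typeable_id": 2, "content": "first"},
    {"field_rule_id": 2, "typeable_id": 2, "content": "second"},
]

# One prepared statement handles the whole batch:
# new (field_rule_id, typeable_id) pairs are inserted,
# existing pairs get their content updated in place.
conn.executemany(
    """
    INSERT INTO field_rule_results (field_rule_id, typeable_id, content)
    VALUES (:field_rule_id, :typeable_id, :content)
    ON CONFLICT (field_rule_id, typeable_id)
    DO UPDATE SET content = excluded.content
    """,
    parsed,
)

count = conn.execute("SELECT COUNT(*) FROM field_rule_results").fetchone()[0]
print(count)  # 2
```

Whether Eloquent's upsert fits here depends on how the morphMany relation fills in typeable_type, so treat this as a direction rather than a drop-in replacement.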

For reference, the above request generates the 8 queries below:

select * from `streams` where `streams`.`id` = ? limit 1
select * from `documents` where `documents`.`id` = ? limit 1
select * from `streams` where `streams`.`id` = ? limit 1
select * from `field_rules` where (`stream_id` = ? and `fieldable_type` = ?)
select * from `field_rule_results` where `field_rule_results`.`typeable_id` = ? and...
select * from `field_rule_results` where `field_rule_results`.`typeable_id` = ? and...
insert into `field_rule_results` (`field_rule_id`, `typeable_id`, `typeable_type`, `content`, `updated_at`, `created_at`) values (..)
insert into `field_rule_results` (`field_rule_id`, `typeable_id`, `typeable_type`, `content`, `updated_at`, `created_at`) values (..)

The above works fine, but it seems a bit heavy, and I can imagine that once my users start to generate a lot of rules/results, this will become a problem.

Is there any way that I can optimize/refactor the above setup?

Improving the speed of creation for three Perlin Noise Maps in Python?

I am interested in learning how I can improve the speed of the code in this pygame file. I iterate over 6400 * 1800 * 3 elements of various numpy arrays to apply noise values to them. The noise library I'm using can be found on GitHub here. The rest are self-explanatory noise maps.

I am calling static variables from a class called ST here. ST.MAP_WIDTH = 6400 and ST.MAP_HEIGHT = 1800.

from __future__ import division
from singleton import ST
import numpy as np
import noise
import timeit
import random
import math


def __noise(noise_x, noise_y, octaves=1, persistence=0.5, lacunarity=2.0):
    """
    Generates and returns a noise value.

    :param noise_x: The noise value of x
    :param noise_y: The noise value of y
    :return: numpy.float32
    """
    value = noise.pnoise2(noise_x, noise_y,
                          octaves, persistence, lacunarity,
                          random.randint(1, 9999))

    return np.float32(value)


def __elevation_mapper(noise_x, noise_y):
    """
    Finds and returns the elevation noise for the given noise_x and
    noise_y parameters.

    :param noise_x: noise_x = x / ST.MAP_WIDTH - randomizer
    :param noise_y: noise_y = y / ST.MAP_HEIGHT - randomizer
    :return: float
    """
    return __noise(noise_x, noise_y, 8, 0.9)


def __climate_mapper(y, noise_x, noise_y):
    """
    Finds and returns the climate noise for the given noise_x and
    noise_y parameters.

    :param noise_x: noise_x = x / ST.MAP_WIDTH - randomizer
    :param noise_y: noise_y = y / ST.MAP_HEIGHT - randomizer
    :return: float
    """
    # find distance from bottom of map and normalize to range [0, 1]
    distance = math.sqrt((y - (ST.MAP_HEIGHT >> 1))**2) / ST.MAP_HEIGHT

    value = __noise(noise_x, noise_y, 8, 0.7)

    return (1 + value - distance) / 2


def __rainfall_mapper(noise_x, noise_y):
    """
    Finds and returns the rainfall noise for the given noise_x and
    noise_y parameters.

    :param noise_x: noise_x = x / ST.MAP_WIDTH - randomizer
    :param noise_y: noise_y = y / ST.MAP_HEIGHT - randomizer
    :return: float
    """
    return __noise(noise_x, noise_y, 4, 0.65, 2.5)


def create_map_arr():
    """
    This function creates the elevation, climate, and rainfall noise maps,
    normalizes them to the range [0, 1], and then assigns them to their
    appropriate attributes in the singleton ST.
    """
    start = timeit.default_timer()

    elevation_arr = np.zeros([ST.MAP_HEIGHT, ST.MAP_WIDTH], np.float32)
    climate_arr = np.zeros([ST.MAP_HEIGHT, ST.MAP_WIDTH], np.float32)
    rainfall_arr = np.zeros([ST.MAP_HEIGHT, ST.MAP_WIDTH], np.float32)

    randomizer = random.uniform(0.0001, 0.9999)

    # assign noise map values
    for y in range(ST.MAP_HEIGHT):
        for x in range(ST.MAP_WIDTH):
            noise_x = x / ST.MAP_WIDTH - randomizer
            noise_y = y / ST.MAP_HEIGHT - randomizer

            elevation_arr[y][x] = __elevation_mapper(noise_x, noise_y)
            climate_arr[y][x] = __climate_mapper(y, noise_x, noise_y)
            rainfall_arr[y][x] = __rainfall_mapper(noise_x, noise_y)

    # normalize to range [0, 1] and assign to relevant ST attributes
    ST.ELEVATIONS = (elevation_arr - elevation_arr.min()) / \
                    (elevation_arr.max() - elevation_arr.min())

    ST.CLIMATES = (climate_arr - climate_arr.min()) / \
                  (climate_arr.max() - climate_arr.min())

    ST.RAINFALLS = (rainfall_arr - rainfall_arr.min()) / \
                   (rainfall_arr.max() - rainfall_arr.min())

    stop = timeit.default_timer()
    print("GENERATION TIME: " + str(stop - start))