ODE problem using DSolve

I would like to use DSolve (or NDSolve) to verify that the solution to the ODE problem

-4(v''[t]+(2/r)v'[t])-2*v[t]*Log[v[t]]-(3+(3/2)Log[4 Pi])*v[t]==0, 

with conditions $ \lim_{t\to \infty}v(t)=0$ and $ v'(0)=0$ is given by

v[t]=(4 Pi)^(-3/4)*Exp[-t^2/8]. 

I am able to verify this by hand, but am having trouble using Mathematica to verify it. I would like to use Mathematica to solve this differential equation, and later on modify some terms in the ODE to see how the solution changes.
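
As a sanity check, independent of DSolve, the by-hand verification can be scripted by direct substitution. The sketch below assumes the 2/r term is meant to be 2/t (with r left as a free symbol the combination does not vanish):

vSol[t_] := (4 Pi)^(-3/4) Exp[-t^2/8];

lhs = -4 (vSol''[t] + (2/t) vSol'[t]) - 2 vSol[t] Log[vSol[t]] - (3 + (3/2) Log[4 Pi]) vSol[t];

Simplify[PowerExpand[lhs]]       (* expected: 0 *)
vSol'[0]                         (* expected: 0 *)
Limit[vSol[t], t -> Infinity]    (* expected: 0 *)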

Perhaps I am making a foolish mistake. I have also tried using NDSolve, but did not obtain the correct solution. I would appreciate any tips. Below you can find the picture of the error messages. Thanks for your help.

Picture of output

sol = DSolve[{-4 (v''[t] + (2/r) v'[t]) - 2*v[t]*Log[v[t]] - (3 + (3/2) Log[4 Pi])*v[t] == 0,
    v[Infinity] == 0, v'[0] == 0}, v[t], t]

Plot[Evaluate[v[t] /. sol], {t, 0, 10}, PlotRange -> All]

How to get a count of an object through 3 different tables in Postgres, with IDs stored in each table

I’m currently using Postgres 9.6.16.

I am currently using 3 different tables to store a hypothetical user's details.

The first table, called contact, contains:

ID, Preferred_Contact_Method 

The second table, called orders, contains:

ID, UserID, Contact_ID (the ID of the row in the contact table that relates to this order) 

The third table, called order_details, contains:

ID, Orders_ID (the ID of the row in the orders table that these order details relate to) 

The tables contain other data as well, but for minimal reproduction, these are the columns that are relevant to this question.

I am trying to return some data so that I can generate a graph. In this hypothetical store, there are only three ways we can contact a user: Email, SMS, or Physical Mail.

The graph is supposed to show 3 numbers: how many mails, emails, and SMS messages we've sent to the user. In this hypothetical store, whenever you purchase something you get notified of the successful shipment, so these notifications are 1:1 with order_details rows: if there are 10 order_details rows for the same user, then we sent 10 tracking numbers. Since an order can contain multiple order_details rows (each item has its own row in order_details), we can get the count by counting the total order_details rows belonging to a single user/contact and attributing each one to the contact method that user preferred at the time of making that order.

To represent this better: a new user makes a new order and orders 1 apple, 1 banana, and 1 orange. For the apple, the user set the preferred tracking-number delivery to SMS; for the banana, they set it to EMAIL; for the orange, they thought it would be funny to have the tracking number delivered via MAIL. Now I want to generate a graph of this user's preferred delivery methods, so I'd like to query all those rows and obtain:

SMS,   1
EMAIL, 1
MAIL,  1

Here’s a SQL Fiddle link with the schema and test data: http://sqlfiddle.com/#!17/eb8c0

The response with the above dataset should look like this:

method | count
SMS    | 4
EMAIL  | 4
MAIL   | 4
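
For reference, the kind of join and aggregation I have in mind is sketched below; table and column names follow the description above (the actual schema is in the fiddle), and the user filter is just a placeholder:

-- count order_details rows per preferred contact method for one user
SELECT c.preferred_contact_method AS method,
       COUNT(od.id)               AS count
FROM contact c
JOIN orders o         ON o.contact_id = c.id
JOIN order_details od ON od.orders_id = o.id
WHERE o.userid = 1                       -- placeholder user id
GROUP BY c.preferred_contact_method;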

Add custom post type as submenu

I have a custom post type called 'Movies' that I want to add under the WooCommerce menu, after "Extensions", as you can see below:

(screenshot of the WooCommerce admin menu)

The WooCommerce page URL is /wp-admin/admin.php?page=wc-admin

I tried:

register_post_type( 'movie', array(
    'show_in_menu' => 'admin.php?page=wc-admin'
) );

but that didn't work. I am able to place Movies under Tools, Settings, Posts, etc. everywhere else, just not inside WooCommerce. Any ideas why?
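
Another variant I have been considering is hooking admin_menu and adding the submenu manually, roughly as below; note that 'woocommerce' as the parent slug and the late priority are guesses on my part:

add_action( 'admin_menu', function() {
    // Link the Movies list table under the WooCommerce top-level menu.
    add_submenu_page(
        'woocommerce',               // parent menu slug (assumed)
        'Movies',                    // page title
        'Movies',                    // menu title
        'edit_posts',                // capability
        'edit.php?post_type=movie'   // menu slug: the Movies list screen
    );
}, 99 );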

If the player is not holding the ladder, they should not be able to jump twice

I am using Unity2D and I am stuck on a problem. In my game I gave the player a double-jump ability. I want the player to be able to jump twice only if they are holding the ladder.

The problem is that if the player jumps through the ladder without holding it, they jump twice (as if the user had pressed the jump key twice). I want the player to double jump only if they are holding the ladder and then press jump. If the player just jumps through the ladder, they should not be able to jump twice.

Below is the player script:

[SerializeField] float _xSpeed = 1f;
[SerializeField] float _ySpeed = 1f;
[SerializeField] float _jumpForce = 1f;
[SerializeField] float _distance;
[SerializeField] LayerMask _ladderLayer;
private float _horizontalMovement;
private float _verticalMovement;
private Rigidbody2D _rb;
private bool _isClimbing;
private bool _isMovingHorizontal = true;

[SerializeField] Transform _groundPos;
[SerializeField] float _checkRadius;
[SerializeField] LayerMask _groundLayer;
private bool _isGrounded;

// ExtraJump
private int _extraJumps;
[SerializeField] int _extraJumpValue = 1;

void Start()
{
    _rb = GetComponent<Rigidbody2D>();
    _extraJumps = _extraJumpValue;
}

void Update()
{
    _horizontalMovement = Input.GetAxis("Horizontal");
    _verticalMovement = Input.GetAxis("Vertical");
}

void FixedUpdate()
{
    if (_isMovingHorizontal)
    {
        _rb.velocity = new Vector2(_horizontalMovement * _xSpeed, _rb.velocity.y);
    }

    _isGrounded = Physics2D.OverlapCircle(_groundPos.position, _checkRadius, _groundLayer);

    if (_isGrounded == true)
    {
        _extraJumps = _extraJumpValue;
    }

    if (Input.GetKeyDown(KeyCode.Space) && _extraJumps > 0)
    {
        _rb.velocity = new Vector2(_rb.velocity.x, _jumpForce);
        _extraJumps--;
    }
    else if (Input.GetKeyDown(KeyCode.Space) && _extraJumps == 0 && _isGrounded == true)
    {
        _rb.velocity = new Vector2(_rb.velocity.x, _jumpForce);
    }

    RaycastHit2D hitLadder = Physics2D.Raycast(transform.position, Vector2.up, _distance, _ladderLayer);

    if (hitLadder.collider == true)
    {
        if (Input.GetKey(KeyCode.W))
        {
            _isClimbing = true;
            _rb.gravityScale = 0;
        }
        else if (Input.GetKey(KeyCode.Space))
        {
            _isClimbing = false;
        }
    }

    if (_isClimbing == true && hitLadder.collider == true)
    {
        _rb.velocity = new Vector2(_rb.velocity.x, _verticalMovement * _ySpeed);
    }
    else
    {
        _rb.gravityScale = 1;
    }
}

How to create metrics of specific events in Google Analytics or Data Studio?

On my site, I am firing an event when the user lands on a specific URL. I’m firing another event when the user clicks on a button on that specific page.

When I go to Analytics, I can see the Total Events + Unique events for each dimension (specific page view, button click on specific page).

How is it possible to create a new metric (I assume a calculated metric) that displays the Unique Events as a metric for any dimension I'd like to use in a table?

Here is an example of what I have now:

Event Label          | Total Events | Unique Events
specific_page_views  | 100          | 50
specific_page_clicks | 20           | 10

…and what I’d like to achieve:

Source   | Specific Unique Page Views | Specific Unique Page Clicks | Specific Page CTR %
(direct) | 30                         | 8                           | 26.67
google   | 15                         | 2                           | 13.34
bing     | 5                          | 0                           | 0

I have been searching for an answer for many days now and can't seem to find one for this simple question. I will want to create several of these metrics, not just two.

Is this actually possible with Google Analytics or Google Data Studio? (I'm planning to use Google Data Studio.)

Redirecting users from admin pages: the optimal solution

On two different sites I use two somewhat different solutions. Are they different in terms of security and performance? Which is better? Are there even better ones?

Solution 1.

/** Redirect users from admin pages if not administrators **/
add_action( 'admin_init', function() {
    if ( ! current_user_can( 'manage_options' ) && ( ! wp_doing_ajax() ) ) {
        wp_safe_redirect( site_url() );
        exit;
    }
} );

Solution 2.

/** Redirect users from admin pages if not administrators **/
add_action( 'admin_init', function() {
    if ( ! in_array( 'administrator', wp_get_current_user()->roles ) ) {
        wp_redirect( get_bloginfo( 'wpurl' ) );
        exit;
    }
} );

How to contour plot a quantized function?

I am trying to plot a function over a 2-dimensional region; the function takes integer and half-integer values.

However, due to numerical approximations and errors, the calculated value of the function sometimes becomes 0.99 or 1.01 instead of exactly 1. When I make a contour plot, it gives a certain color between 0 and 1, another color between 1 and 2, and so on. As a result, 0.99 and 1.01 acquire different colors (while I want both of them to be the same color, because they represent 1).

What would be an efficient way to plot the different integer values (approximately, up to numerical errors) and half-integer values with different colors in a contour (or similar) plot?

Also, the function takes values between -2 and 2, so I don’t need to take care of all integers.

I cannot use the Floor function because that would send both 0.99 (which should be 1) and 0.01 (which should be 0) to 0.
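
One direction I have been considering is snapping the computed values to the nearest half-integer before plotting, via Round[value, 1/2], rather than flooring them. A rough sketch, where f is just a stand-in for the actual function:

(* f is a placeholder for the real function; it takes values in [-2, 2] *)
f[x_, y_] := Cos[x] + Cos[y];

(* Round[..., 1/2] sends 0.99 and 1.01 to 1, and 0.01 to 0 *)
ContourPlot[Round[f[x, y], 1/2], {x, 0, 2 Pi}, {y, 0, 2 Pi},
  Contours -> Range[-2, 2, 1/2], PlotLegends -> Automatic]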

Understanding Postgres query planner behaviour with a GIN index

I need your expert opinion on index usage and query planner behaviour.

\d orders
                           Partitioned table "public.orders"
           Column            |           Type           | Collation | Nullable | Default
-----------------------------+--------------------------+-----------+----------+---------
 oid                         | character varying        |           | not null |
 user_id                     | character varying        |           | not null |
 tags                        | text[]                   |           | not null |
 category                    | character varying        |           |          |
 description                 | character varying        |           |          |
 order_timestamp             | timestamp with time zone |           | not null |
 .....
Partition key: RANGE (order_timestamp)
Indexes:
    "orders_uid_country_ot_idx" btree (user_id, country, order_timestamp)
    "orders_uid_country_cat_ot_idx" btree (user_id, country, category, order_timestamp desc)
    "orders_uid_country_tag_gin_idx" gin (user_id, country, tags) WITH (fastupdate=off)
    "orders_uid_oid_ot_key" UNIQUE CONSTRAINT, btree (user_id, oid, order_timestamp)

I have observed the following behaviour, depending on the query parameters, when I run this query:

select * from orders
where user_id = 'u1'
  and country = 'c1'
  and tags && '{t1}'
  and order_timestamp >= '2021-01-01 00:00:00+00'
  and order_timestamp < '2021-03-25 05:45:47+00'
order by order_timestamp desc
limit 10 offset 0

Case 1: when querying for tag t1, which occurs in roughly 99% of the records for user u1, the first index, orders_uid_country_ot_idx, is picked.

Limit  (cost=0.70..88.97 rows=21 width=712) (actual time=1.967..12.608 rows=21 loops=1)
  ->  Index Scan Backward using orders_y2021_jan_to_uid_country_ot_idx on orders_y2021_jan_to_jun orders  (cost=0.70..1232.35 rows=293 width=712) (actual time=1.966..12.604 rows=21 loops=1)
        Index Cond: (((user_id)::text = 'u1'::text) AND ((country)::text = 'c1'::text) AND (order_timestamp >= '2021-01-01 00:00:00+00'::timestamp with time zone) AND (order_timestamp < '2021-03-25 05:45:47+00'::timestamp with time zone))
        Filter: (tags && '{t1}'::text[])
Planning Time: 0.194 ms
Execution Time: 12.628 ms

Case 2: but when I query for tag value t2 with tags && '{t2}', where t2 is present in anywhere from 0 to under 3% of the records for a user, the GIN index is picked.

Limit  (cost=108.36..108.38 rows=7 width=712) (actual time=37.822..37.824 rows=0 loops=1)
  ->  Sort  (cost=108.36..108.38 rows=7 width=712) (actual time=37.820..37.821 rows=0 loops=1)
        Sort Key: orders.order_timestamp DESC
        Sort Method: quicksort  Memory: 25kB
        ->  Bitmap Heap Scan on orders_y2021_jan_to_jun orders  (cost=76.10..108.26 rows=7 width=712) (actual time=37.815..37.816 rows=0 loops=1)
              Recheck Cond: (((user_id)::text = 'u1'::text) AND ((country)::text = 'ID'::text) AND (tags && '{t2}'::text[]))
              Filter: ((order_timestamp >= '2021-01-01 00:00:00+00'::timestamp with time zone) AND (order_timestamp < '2021-03-25 05:45:47+00'::timestamp with time zone))
              ->  Bitmap Index Scan on orders_y2021_jan_to_uid_country_tag_gin_idx  (cost=0.00..76.10 rows=8 width=0) (actual time=37.812..37.812 rows=0 loops=1)
                    Index Cond: (((user_id)::text = 'u1'::text) AND ((country)::text = 'c1'::text) AND (tags && '{t2}'::text[]))
Planning Time: 0.190 ms
Execution Time: 37.935 ms
  1. Is this because the query planner recognizes that 99% of the records match in case 1, so it skips the GIN index and uses the first index directly? If so, does Postgres determine this from the statistics? (See the pg_stats query after this list.)

  2. Before the GIN index was created, the first index was also picked for case 2, and performance was very bad because the index access range is large, i.e. the number of records satisfying the user_id, country and timestamp conditions is very high. The GIN index improved this, but I'm curious to understand how Postgres chooses between the two so selectively.

  3. orders_uid_country_cat_ot_idx was added to support filtering by category, because when the GIN index was used for filters on just category, or on both category and tags, performance was bad compared to when the btree index on (user_id, country, category, order_timestamp) was picked. I expected the GIN index to work well for every combination of category and tags filters. What could be the reason? The table contains millions of rows.
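
Regarding question 1, this is how I have been peeking at what the planner knows about tag frequencies; pg_stats exposes most_common_elems / most_common_elem_freqs for array columns (the partition name is taken from the plans above):

-- per-column statistics the planner works from
SELECT attname, n_distinct, most_common_elems, most_common_elem_freqs
FROM pg_stats
WHERE tablename = 'orders_y2021_jan_to_jun'
  AND attname IN ('tags', 'user_id', 'country');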