[ Psychology ] Open Question : I'm aroused by watching videos and images of, and reading and writing about, women who are unconscious, having seizures or suffering other medical problems. Is it wrong?

I'm a woman, I'm 20 and I'm bisexual. Ever since I was around 12, I've been aroused by videos of women being knocked unconscious, receiving CPR, having seizures and other medical issues. I've even started writing about celebrities that I have crushes on going through these problems. Recently, when I read about actress Emilia Clarke having an aneurysm, I wrote about my favorite actress, Gal Gadot, having an aneurysm in the middle of a premiere. I also go on DeviantArt and search for images of unconscious and passed-out women. I don't ever want to hurt anyone or do anything without their consent, I just enjoy this in videos and literature. Is this weird?

Ubuntu does not start after the upgrade / error: failure reading sector 0x0 from `cd0'

A couple of days ago, on my Ubuntu 16.04 LTS laptop, I ran

username@username:~$ sudo apt upgrade

then minimized the terminal and, after a while, turned off the laptop, forgetting to check whether the command had finished. The next time I turned on the laptop, GRUB greeted me as always, and I typed

grub> exit 

as always and chose the fourth option in the Boot Manager (as always), UEFI Onboard LAN IPv4 (50-9A-4C-B3-C9-0B).
I expected that, as always, the Unity shell would greet me, I would enter my password and start working. But this did not happen: I got 3 errors, after which GRUB instantly reappeared. I did not even have time to read what the errors were, so I recorded them with a camcorder, paused the recording, and read them:

error: failure reading sector 0x0 from `cd0'
error: failure reading sector 0x0 from `cd0'
error: no such device: b3e461d6rced4-45a8-afe4-2094d34ae956

That is the problem: I cannot start the system.

What I tried to do to solve the problem: I don't understand this well, but I saw a tip on a site to make a bootable USB flash drive, so I made one with Ubuntu 16.04.6 (downloaded from the official Ubuntu site). I booted the laptop from it and chose Try Ubuntu without installing, but in the end only a black screen came up and nothing happened (I left the laptop for 30 minutes, but nothing changed).
After that, I turned off the laptop and, on the next boot from the USB drive, selected Check disc for defects. The result showed Check finished: errors found in 7 files!.

The other options in the Boot Manager are:
HDD1-ubuntu (TOSHIBA MQ01ABD100)
HDD2-ubuntu (TOSHIBA MQ01ABD100)
UEFI Onboard LAN IPv6 (50-9A-4C-B3-C9-0B)

The first one returns the same errors as the fourth (which I described at the beginning of this text).
The second one brings up a menu with 4 options:
Ubuntu
Advanced options for Ubuntu
System setup
Restore Ubuntu 16.04 to factory state

If I choose ‘Ubuntu’, I get a blinking Caps Lock indicator and about 20 lines of text, the last of which is

[   0.721315] ---[ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)

and after this nothing happens.
The third one returns the same errors as the fourth (which I described at the beginning of this text).

K3B burn verify fails; error reading sector; Resulting DVD either unreadable or corrupt. Please help

Please help with the following K3B burn problem. Recently I have been trying to write .iso images as a data project to DVD using K3b, but the DVDs often end up corrupt or unreadable.

The burn session always reports success, but the read verification seems to fail every other time I create a DVD. The error says something like:

“Error reading sector…” and whatever number etc.

When testing DVDs that fail the verification, it usually turns out that the last track on the disc can’t be copied to the desktop and results in an error message like:

“Error splicing file input/output” or something like that.
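
To at least tell good burns from bad ones without copying files to the desktop, one thing I am considering is comparing checksums of the original .iso and the copy on the mounted disc. A rough sketch of that check, in Python; the paths are assumptions about my setup:

    import hashlib

    def sha256_of(path, chunk_size=1024 * 1024):
        """Hash a file in chunks so a large .iso does not have to fit in memory."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Hypothetical paths: the original image on disk and the copy on the mounted DVD.
    original = sha256_of("/home/me/isos/example.iso")
    burned = sha256_of("/media/me/DVD_LABEL/example.iso")

    print("match" if original == burned else "MISMATCH - the burned copy is corrupt")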

Unfortunately, I couldn’t find any answers online and I would like to avoid wasting time and DVDs every other time I burn a DVD. I don’t know how to stop these corrupt burns from happening in the future. Any help to resolve this issue would be appreciated. Thank you.

SharePoint Designer 2013 Workflow: Reading Check In comment, programmatic elevation of permissions, alternatives?

I've created a workflow for a review process that copies a file into a "working" library, reads the check-in comment through the REST API (with elevated permissions and full appinv access), and then saves it as part of the comment history log (among many other things, but this is the specific problem).
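
For context, the read itself is just a single REST GET of the file's CheckInComment property, roughly like the sketch below (written in Python purely for illustration; the site URL and file path are placeholders, and in the real workflow the call is made through a Call HTTP Web Service action using the elevated app permissions):

    import requests

    site_url = "https://contoso.sharepoint.com/sites/project"      # placeholder site
    file_url = "/sites/project/Working/Document.docx"              # placeholder file path

    # SP.File exposes the last check-in comment as the CheckInComment property.
    endpoint = (site_url +
                "/_api/web/GetFileByServerRelativeUrl('" + file_url + "')/CheckInComment")

    resp = requests.get(
        endpoint,
        headers={"Accept": "application/json;odata=verbose"},
        # Authentication omitted: the workflow supplies app-only credentials
        # granted through appinv.aspx.
    )
    resp.raise_for_status()
    print(resp.json()["d"]["CheckInComment"])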

This has been working great and dandy UNTIL now: I've packaged it and need to deploy it as a template, and I've noticed that the workflow ID changes. This means that each time a new site (project site) is created, it needs to somehow mail/flag someone from IT to elevate permissions in appinv for that specific site collection (manual admin appinv authorization).

I know how to do all of the above very well, except that it's not going to work for a massive deployment. With all of the above said, these are the questions:

  • Can I get the Check In Comment through a SharePoint Designer 2010 or 2013 workflow in any other way that does not involve using the REST API?
  • Is it possible to programmatically elevate permissions? I found this resource, but I can't seem to make anything out of it; it's a PowerShell script and should be run on a timer? http://parlaesolutions.com/blogs/Programmatically-Enable-Trust-for-Built-In-Workflow-App
  • Any other ideas?

My first suggestion was: could we create a new field and add the comment there? That was followed by a no; sadly, it has to be the Check In Comment.

Thank you!

An unexpected ‘PrimitiveValue’ node was found when reading from the JSON reader

I am getting this error while trying to post data to a SharePoint list. It has one lookup field, PostCategory.

{"readyState":4,"responseText":"{\"error\":{\"code\":\"-1, Microsoft.SharePoint.Client.InvalidClientQueryException\",\"message\":{\"lang\":\"en-US\",\"value\":\"An unexpected 'PrimitiveValue' node was found when reading from the JSON reader. A 'StartObject' node was expected.\"}}}","responseJSON":{"error":{"code":"-1, Microsoft.SharePoint.Client.InvalidClientQueryException","message":{"lang":"en-US","value":"An unexpected 'PrimitiveValue' node was found when reading from the JSON reader. A 'StartObject' node was expected."}}},"status":400,"statusText":"error"}

I have created another lookup field, and I was able to update that one via REST, but I am still not able to update the previous field.

[Screenshot: REST query results for the PostCategory and NewCategory lookup fields]

PostCategory is the existing field and I have added the NewCategory field. Both are lookup fields, but there is some difference: there is no results array in NewCategory. I think maybe this is the reason I am not able to post data to the PostCategory field.

    $.ajax({
        url: _spPageContextInfo.webAbsoluteUrl + "/_api/web/lists/GetByTitle('Posts')/items(1)",
        type: "POST",
        headers: {
            "accept": "application/json;odata=verbose",
            "X-RequestDigest": $("#__REQUESTDIGEST").val(),
            "content-Type": "application/json;odata=verbose",
            "IF-MATCH": "*",
            "X-HTTP-Method": "MERGE"
        },
        data: "{__metadata:{'type':'SP.Data.PostsListItem'},PostCategoryId: 4}",
        success: function(data) {
            console.log(data.d.results);
        },
        error: function(error) {
            console.log(JSON.stringify(error));
        }
    });
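
If the difference is that PostCategory allows multiple values (which would explain the results array in its REST output, while NewCategory has none), then the MERGE body would presumably need to wrap the ID in a results array. A minimal sketch of the two payload shapes, written as Python dicts just for illustration (the multi-value assumption is mine; the field names are from above):

    # Single-value lookup: a bare ID, which is what succeeds for NewCategory.
    single_value_payload = {
        "__metadata": {"type": "SP.Data.PostsListItem"},
        "NewCategoryId": 4,
    }

    # Multi-value lookup: the IDs go inside a results array (assumed fix for PostCategory).
    multi_value_payload = {
        "__metadata": {"type": "SP.Data.PostsListItem"},
        "PostCategoryId": {"results": [4]},
    }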

Reading the contents of a file in a Sharepoint group folder

I'm trying to read the contents of a file in a SharePoint group folder using the Microsoft Graph Explorer; however, the query I'm using doesn't seem to be working:

https://graph.microsoft.com/v1.0/groups/<group_id>/drive/root/children/<filename>/$value

Is there a way to define a query with which I can pull the file contents in Postman, if Graph Explorer can't do it?
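
A minimal sketch of the kind of request I'd like to end up with in Postman, shown in Python for concreteness (the group id, filename and token are placeholders, and the root:/<filename>:/content path addressing is just my guess at the right form):

    import requests

    GROUP_ID = "<group_id>"         # placeholder
    FILENAME = "Report.xlsx"        # hypothetical file name
    TOKEN = "<access_token>"        # placeholder bearer token

    # Guess: address the drive item by path and request its content stream.
    url = ("https://graph.microsoft.com/v1.0/groups/{}/drive/root:/{}:/content"
           .format(GROUP_ID, FILENAME))

    resp = requests.get(url, headers={"Authorization": "Bearer " + TOKEN})
    resp.raise_for_status()

    with open(FILENAME, "wb") as f:
        f.write(resp.content)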

Performance issue while reading data from hive using python

Please see the details below and help.

I have a table in Hive with 351,837 records (110 MB in size), and I am reading this table using Python and writing it into SQL Server.

In this process, reading the data from Hive into a pandas dataframe takes a long time. When I load all the records (351k) it takes 90 minutes.

To improve this, I went with the following approach: reading 10k rows at a time from Hive and writing them into SQL Server. But reading 10k rows from Hive and assigning them to a dataframe alone takes 4-5 minutes.

Please see below code.

    import datetime
    import urllib

    import pandas
    import pyodbc
    import sqlalchemy

    def execute_hadoop_export():
        """
        This will run the steps required for a Hadoop Export.
        Return value is boolean for success/fail.
        """
        try:
            hql = 'select * from db.table'
            # Open Hive ODBC connection
            src_conn = pyodbc.connect("DSN=****", autocommit=True)
            cursor = src_conn.cursor()
            # tgt_conn = pyodbc.connect(target_connection)
            # Using SQLAlchemy to dynamically generate the query and leverage
            # dataframe.to_sql to write to SQL Server...
            sql_conn_url = urllib.quote_plus('DRIVER={ODBC Driver 13 for SQL Server};SERVER=Xyz;DATABASE=Db2;UID=ee;PWD=*****')
            sql_conn_str = "mssql+pyodbc:///?odbc_connect={0}".format(sql_conn_url)
            engine = sqlalchemy.create_engine(sql_conn_str)
            # Read the source table in chunks.
            vstart = datetime.datetime.now()
            for df in pandas.read_sql(hql, src_conn, chunksize=10000):
                # Remove table alias from columns (happens by default in Hive due to
                # ODBC settings; use Native Query perhaps?)
                vfinish = datetime.datetime.now()
                df.rename(columns=lambda x: remove_table_alias(x), inplace=True)
                print 'Finished 10k rows reading from hive and it took', (vfinish - vstart).seconds / 60.0, ' minutes'
                # Get connection string for target from Ctrl.Connnection
                df.to_sql(name='table', schema='dbo', con=engine, chunksize=10000, if_exists="append", index=False)
                print 'Finished 10k rows writing into sql server and it took', (datetime.datetime.now() - vfinish).seconds / 60.0, ' minutes'
                vstart = datetime.datetime.now()
            cursor.close()
        except Exception, e:
            print str(e)

Please see the attached screenshot for the timing output.

Could you kindly suggest the fastest way to read Hive table data in Python?

Note: I have also tried the Sqoop export option, but my Hive table is already in bucketed format.
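
For comparison, below is a minimal sketch of an alternative read path I could try: pulling rows through the raw ODBC cursor with fetchmany and building each chunk with DataFrame.from_records, in case pandas.read_sql itself adds overhead. The DSN and query are the same placeholders as in my code above, remove_table_alias stands in for my existing helper, and whether this is actually faster will depend on the Hive ODBC driver.

    import datetime

    import pandas
    import pyodbc

    def remove_table_alias(name):
        # Assumed to behave like my existing helper: strip the "table." prefix
        # that the Hive ODBC driver adds to column names.
        return name.split('.')[-1]

    def read_hive_in_chunks(hql='select * from db.table', chunk_rows=10000):
        """Sketch: stream rows from Hive via the raw ODBC cursor instead of read_sql."""
        src_conn = pyodbc.connect("DSN=****", autocommit=True)
        cursor = src_conn.cursor()
        cursor.execute(hql)
        columns = [remove_table_alias(col[0]) for col in cursor.description]
        while True:
            vstart = datetime.datetime.now()
            rows = cursor.fetchmany(chunk_rows)
            if not rows:
                break
            df = pandas.DataFrame.from_records(rows, columns=columns)
            elapsed = (datetime.datetime.now() - vstart).seconds / 60.0
            print('Fetched %d rows from hive in %.1f minutes' % (len(df), elapsed))
            yield df
        cursor.close()
        src_conn.close()

    # Usage would mirror the write loop above, e.g.:
    # for df in read_hive_in_chunks():
    #     df.to_sql(name='table', schema='dbo', con=engine, chunksize=10000,
    #               if_exists="append", index=False)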