How to remove the SIM Toolkit from my phone? (Without root or flashing any ROM)

This application has been sending a lot of spam to my phone recently, two or three times a day. I deactivated broadcast messages and tried converting USSD pop-ups into notifications to avoid being interrupted, but nothing I tried worked, so now I want to remove it from my phone. Is it possible to do this without root, with ADB or something? I'm not a power user, but I have used ADB to grant permissions to some apps before.

Open encrypted PDF File without knowing the password

As an exercise, I wrote a program to decrypt a PDF file when the password is not known.

The restriction on the password is that a single English word (all capitals or all lowercase) was used to encrypt the PDF file.

The program uses a dictionary file to unlock the PDF. It looks like this:

dictionary.txt

AARHUS AARON ABABA ABACK ABAFT ABANDON ABANDONED ABANDONING ABANDONMENT ABANDONS ABASE ABASED ABASEMENT ABASEMENTS ABASES ABASH ABASHED ABASHES   ZEROTH ZEST ZEUS ZIEGFELD ZIEGFELDS ZIEGLER ZIGGY ZIGZAG ZILLIONS ZIMMERMAN ZINC ZION ZIONISM ZIONIST ZIONISTS ZIONS ZODIAC ZOE ZOMBA ZONAL ZONALLY ZONE ZONED ZONES ZONING ZOO ZOOLOGICAL ZOOLOGICALLY ZOOM ZOOMS ZOOS ZORN ZOROASTER ZOROASTRIAN ZULU ZULUS ZURICH 

The real file contains over 45,000 words. It can be found here if you want to try it out.

pdf_password_breaker

""" Brute force password breaker using a dictionary containing English words. """  import sys import PyPDF2 from pathlib import Path  def get_filename_from_user() -> Path:     """Asks for a path from the User"""     while True:         filename: str = input("Enter filename in folder of script:")         path: Path = Path(sys.path[0], filename)          if path.is_file():             return path.as_posix()         print("File doesn't exist\n")   def decrypt(pdf_filename: Path, password: str) -> bool:     """     Try to decrypt a file. If not successful a false is returned.     If the file passed is not encrypted also a false is passed     """     with open(pdf_filename, 'rb') as pdf_file:         pdf_reader = PyPDF2.PdfFileReader(pdf_file)         pdf_reader.decrypt(password)         pdf_writer = PyPDF2.PdfFileWriter()          try:             for page_number in range(pdf_reader.numPages):                 pdf_writer.addPage(pdf_reader.getPage(page_number))         except PyPDF2.utils.PdfReadError:             return False          new_name: str = pdf_filename.stem + "_decrypted.pdf"         filename_decrypted = pdf_filename.parent / new_name          with open(filename_decrypted, 'wb') as pdf_file_decrypted:             pdf_writer.write(pdf_file_decrypted)     return True   def break_encryption(pdf_filename: Path, dictionary_filename: str) -> bool:     """Try's out words from a dictionary to break encryption"""     with open(dictionary_filename, 'r') as dictionary_file:         keyword: str = dictionary_file.readline().strip()          if decrypt(pdf_filename, keyword):             return keyword         if decrypt(pdf_filename, keyword.lower()):             return keyword.lower()          while keyword:             keyword = dictionary_file.readline().strip()              if decrypt(pdf_filename, keyword):                 return keyword             if decrypt(pdf_filename, keyword.lower()):                 return keyword.lower()     return None   def 
pdf_password_breaker():     """main loop"""     filename: Path = get_filename_from_user()     password: str = break_encryption(filename, "dictionary.txt")      if password:         print("File unlocked. Password was:" + password)         return     print("File could not be unlocked")  if __name__ == "__main__":     pdf_password_breaker() 

I tried a file with a simple password like “hello”. It works, but it takes a lot of time until it reaches “hello” in the dictionary.

I wonder if there's a way to improve the speed.

I also wonder what can be improved in the code in general; please let me know.
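One way to speed this up (a sketch, not a drop-in replacement): the posted code re-opens the PDF and copies every page for each guess, but it is much cheaper to open the reader once, test only the return value of `decrypt()` (in the old-style PyPDF2 API it returns 0 on failure and nonzero on success), and write the decrypted copy a single time after the password is found. The `break_pdf` helper below is an assumed name and assumes the same PyPDF2 version and dictionary layout as the original script:

```python
from pathlib import Path


def find_password(words, try_password):
    """Return the first candidate accepted by try_password, or None.

    Each dictionary word is tried as-is (all caps in the dictionary)
    and lower-cased, so every word costs at most two cheap attempts.
    """
    for word in words:
        for candidate in (word, word.lower()):
            if try_password(candidate):
                return candidate
    return None


def break_pdf(pdf_path: Path, dictionary_path: Path):
    """Open the reader once; pages are never copied while guessing."""
    import PyPDF2  # same dependency as the original script

    with open(pdf_path, "rb") as fh:
        reader = PyPDF2.PdfFileReader(fh)
        words = dictionary_path.read_text().split()
        # decrypt() returns 0 on failure, 1 or 2 on success (old PyPDF2 API).
        return find_password(words, lambda pw: reader.decrypt(pw) != 0)
```

With this structure the per-guess cost is a single `decrypt()` call instead of a full read-and-copy of the document, and the dictionary is read once up front.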

Restore site without backup

First time posting in a forum, so excuse me if I ramble on. Running SharePoint 2010.

I run two different app pools on the same SP farm. When I've made changes to one site, I replicate them on the other using the PowerShell commands:

Backup-SPSite -Identity "mysite1" -Path C:\filename

Restore-SPSite -Identity "mysite2" -Path C:\filename -DatabaseName "mysite2DB" -Force

Before doing this, I normally take a backup of the site that I'm writing over. I forgot this time, and I have some users complaining that they have lost work (despite being told which site to use).

Is there any way I can recover the site I wrote over?

Thanks in advance

Is there a printing lab that posts prints directly to customers without an invoice?

Is there an online printing service (like Photobox) that I can use to print and ship prints directly to customers, so that I don't even handle them? If I use a normal online printing service, they will send an invoice. Since I will be charging the customers more than I pay for the prints, is there a service that doesn't send the invoice?

Bonus points if they send the prints with my branding instead of theirs.

Bonus bonus points if they have labs in multiple major countries (USA, UK etc.). Otherwise, just listing a company in each country is fine.

This isn’t for printing wedding photos or anything important like that, just general quality, mass market snaps.

Convert IEnumerable to HTML table string without using Json.NET and DataTable

I want to convert an IEnumerable to an HTML table string without using Json.NET or DataTable.

Here's the code I wrote. It generates HTML table strings well, but it depends on Json.NET.

using System;
using System.Collections;
using System.Linq;
using System.Text;
using Newtonsoft.Json;

void Main()
{
    var datas = Enumerable.Range(1, 2);
    var array = datas.ToArray().ToHtmlTable();      // Run Success
    var set = datas.ToHashSet().ToHtmlTable();      // Run Success
    var list = datas.ToList().ToHtmlTable();        // Run Success
    var enums = datas.AsEnumerable().ToHtmlTable(); // Run Success
}

public static class HTMLTableHelper
{
    public static string ToHtmlTable(this IEnumerable enums)
    {
        return ToHtmlTableConverter(enums);
    }

    public static string ToHtmlTable(this System.Data.DataTable dataTable)
    {
        return ConvertDataTableToHTML(dataTable);
    }

    private static string ToHtmlTableConverter(object enums)
    {
        var jsonStr = JsonConvert.SerializeObject(enums);
        var data = JsonConvert.DeserializeObject<System.Data.DataTable>(jsonStr);
        var html = ConvertDataTableToHTML(data);
        return html;
    }

    private static string ConvertDataTableToHTML(System.Data.DataTable dt)
    {
        var html = new StringBuilder("<table>");

        // Header
        html.Append("<thead><tr>");
        for (int i = 0; i < dt.Columns.Count; i++)
            html.Append("<th>" + dt.Columns[i].ColumnName + "</th>");
        html.Append("</tr></thead>");

        // Body
        html.Append("<tbody>");
        for (int i = 0; i < dt.Rows.Count; i++)
        {
            html.Append("<tr>");
            for (int j = 0; j < dt.Columns.Count; j++)
                html.Append("<td>" + dt.Rows[i][j].ToString() + "</td>");
            html.Append("</tr>");
        }
        html.Append("</tbody>");
        html.Append("</table>");
        return html.ToString();
    }
}

Thanks.

Reducing a graph without changing its chromatic number

Does reducing a graph (removing or replacing vertices or edges) without changing its chromatic number have a specific name?

Take this cactus graph as an example:

a cactus graph

Edges with a vertex of degree 1 could be removed without affecting the vertex chromatic number. I think something similar should be possible with cycles; e.g., removing the two vertices at the bottom of the cactus should not affect its chromatic number.

Are there polynomial algorithms that do that? I would prefer not to reinvent the wheel.
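For the degree-1 part there is a standard polynomial-time reduction: repeatedly deleting vertices of degree at most 1 yields the 2-core of the graph, and the chromatic number is unchanged whenever it is at least 2, since a pendant vertex can always reuse any colour other than its single neighbour's. A minimal sketch (plain Python; the edge-list input format is an assumption):

```python
from collections import defaultdict


def prune_degree_one(edges):
    """Iteratively delete degree-<=1 vertices and return the surviving edges.

    The result is the 2-core of the graph; it has the same chromatic
    number as the input whenever that number is at least 2.
    """
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    queue = [v for v in adj if len(adj[v]) <= 1]  # initial low-degree vertices
    removed = set()
    while queue:
        v = queue.pop()
        if v in removed:
            continue
        removed.add(v)
        for w in adj[v]:
            adj[w].discard(v)
            if len(adj[w]) <= 1 and w not in removed:
                queue.append(w)  # neighbour may now be prunable too
        adj[v].clear()

    return [(u, v) for u in adj for v in adj[u] if u < v]
```

If pruning removes everything, the input was a forest, so its chromatic number is 2 when it had at least one edge and 1 otherwise.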

My goal is to simplify graphs before feeding them into other algorithms.

I would also appreciate references to relevant literature. Thank you!

Finding the longest word without these characters follow-up

Here I asked this question:

My goal is to go through the list of all English words (separated by '\n' characters) and find the longest word which doesn’t have any of these characters: “gkmqvwxz”. And I want to optimize it as much as possible
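As a sanity check for the C version's output, the same task fits in a few lines of Python (a sketch; file reading is left out and the word list is passed in as one newline-separated string, as described above):

```python
def longest_clean_word(text: str, bad: str = "gkmqvwxz") -> str:
    """Longest newline-separated word containing none of the bad characters."""
    banned = set(bad)
    # Lower-case each word before testing, matching the C code's tolower().
    clean = (w for w in text.split("\n") if not banned & set(w.lower()))
    return max(clean, key=len, default="")
```

Like the C program, this keeps the first of several equally long legal words.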

I updated the code with the help of suggestions from answers, but I still need comments on this updated version.

Changes:

  1. Name of the file and the forbidden characters are no longer hard-coded. They are passed by arguments.
  2. Added several error checks.
  3. Used pointers instead of indexes.
  4. buffer is freed when we’re done with it.
  5. Used bool instead of int for the return type of is_legal.
  6. Parameters to is_legal are made const since we don’t change them.
  7. Skip newline characters ('\n') remaining from previous lines.
  8. Added some functions to keep main simple.
  9. Removed superfluous headers (#include <string.h>, #include <stddef.h>, #include <unistd.h>).
  10. is_legal need not know about the entire buffer. Just the relevant pointers are now sent.
  11. length is no longer fixed. We get the size of the array at runtime.
  12. buffer is terminated with null.

Updated code:

#include <ctype.h>
#include <stdlib.h>
#include <stdio.h>
#include <stdbool.h>


static inline bool is_legal(const char* beg, size_t size, const char* bad)
{
    for (; size-- != 0; ++beg) {                        /* go through current word */
        char ch = tolower(*beg);                        /* the char might be upper case */
        for (const char* bad_ptr = bad; *bad_ptr; ++bad_ptr)
            if (ch == *bad_ptr)                         /* if it is found, return false */
                return false;
    }

    return true;                                        /* else return true */
}

static inline size_t get_next_word_size(const char* beg)
{
    size_t size = 0;                                /* resulting size */
    for (; beg[size] && beg[size] != '\n'; ++size)  /* read the next word */
    { }                                             /* for loop doesn't have a body */
    return size;
}

static inline char* get_buffer(const char* filename)
{
    char *buffer = NULL;                    /* contents of the text file */
    long length;                            /* size of the file */
    FILE* fp = fopen(filename, "rb");

    if (!fp) {                              /* checking if file is properly opened */
        perror("Couldn't open the file");
        return NULL;
    }

    if (fseek(fp, 0, SEEK_END)) {
        perror("Failed reading");
        fclose(fp);
        return NULL;
    }

    length = ftell(fp);

    if (fseek(fp, 0, SEEK_SET)) {
        perror("Failed reading");
        fclose(fp);
        return NULL;
    }

    buffer = malloc(length + 1);            /* +1 for null terminator */

    if (buffer == NULL) {                   /* checking if memory is allocated properly */
        perror("Failed to allocate memory");
        fclose(fp);
        return NULL;
    }

    fread(buffer, 1, length, fp);           /* read it all */
    fclose(fp);

    buffer[length] = '\0';                  /* terminate the string with null */
    return buffer;
}


int main(int argc, char **argv)
{
    if (argc < 3) {
        printf("Usage: %s FileName BadChars\n", argv[0]);
        return 0;
    }

    char* filename = argv[1];
    char* badchars = argv[2];

    char *buffer = get_buffer(filename);

    if (buffer == NULL) {
        return -1;
    }

    const char *beg = buffer;               /* current word boundaries */
    size_t size = get_next_word_size(beg);

    const char *mbeg = beg;                 /* result word */
    size_t msize = 0;

    for (;;) {
        if (size > msize && is_legal(beg, size, badchars)) { /* if it is a fit, save it */
            mbeg = beg;
            msize = size;
        }
        if (!beg[size])                  /* end of buffer reached */
            break;
        beg += size + 1;                 /* +1 to skip the '\n' */
        size = get_next_word_size(beg);  /* get the size of the next word */
    }

    printf("%.*s\n", (int)msize, mbeg);  /* print the output */

    free(buffer);
    return 0;
}

I would especially appreciate comments on the way the code reads the entire file into a single dynamically allocated array, and whether and how it could be improved. I wouldn't like to sacrifice performance, but some “best practices”, especially about this part, are very welcome.