Some modules use code versions newer or older than the database. First update the module code, then run ‘setup:upgrade’

The module code base doesn’t match the DB schema and data.
Magento_Theme schema: 2.0.1 -> 2.0.0
Magento_Theme data: 2.0.1 -> 2.0.0
Magento_Customer schema: 2.0.9 -> 2.0.6
Magento_Customer data: 2.0.9 -> 2.0.6
Magento_Cms schema: 2.0.1 -> 2.0.0
Magento_Cms data: 2.0.1 -> 2.0.0
Magento_Catalog schema: 2.1.3 -> 2.0.3
Magento_Catalog data: 2.1.3 -> 2.0.3
Magento_Search schema: 2.0.4 -> 2.0.1
Magento_Search data: 2.0.4 -> 2.0.1
Magento_Quote schema: 2.0.3 -> 2.0.2
Magento_Quote data: 2.0.3 -> 2.0.2
Magento_Msrp schema: 2.1.3 -> 2.0.0
Magento_Msrp data: 2.1.3 -> 2.0.0
Magento_Bundle schema: 2.0.2 -> 2.0.1
Magento_Bundle data: 2.0.2 -> 2.0.1
Magento_Downloadable schema: 2.0.1 -> 2.0.0
Magento_Downloadable data: 2.0.1 -> 2.0.0
Magento_Sales schema: 2.0.3 -> 2.0.1
Magento_Sales data: 2.0.3 -> 2.0.1
Magento_CatalogInventory schema: 2.0.1 -> 2.0.0
Magento_CatalogInventory data: 2.0.1 -> 2.0.0
Magento_GroupedProduct schema: 2.0.1 -> 2.0.0
Magento_GroupedProduct data: 2.0.1 -> 2.0.0
Magento_Integration schema: 2.2.0 -> 2.0.1
Magento_Integration data: 2.2.0 -> 2.0.1
Magento_CatalogRule schema: 2.0.1 -> 2.0.0
Magento_CatalogRule data: 2.0.1 -> 2.0.0
Magento_SalesRule schema: 2.0.1 -> 2.0.0
Magento_SalesRule data: 2.0.1 -> 2.0.0
Magento_Swatches schema: 2.0.1 -> 2.0.0
Magento_Swatches data: 2.0.1 -> 2.0.0
Magento_GiftMessage schema: 2.0.1 -> 2.0.0
Magento_GiftMessage data: 2.0.1 -> 2.0.0
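For reference, the check behind this message can be sketched as follows: the version each module declares in code is compared per module with the version recorded for it in the database (in Magento 2 this bookkeeping lives in the `setup_module` table), and `setup:upgrade` runs the scripts that close the gap. The function names below are my own; this is only an illustration of the comparison, not Magento’s actual code.

```python
# Illustrative only: a dotted-version comparison like the one behind the
# error above. Code-side and DB-side versions are compared per module.
def compare_versions(a: str, b: str) -> int:
    """Return 1, 0 or -1 for a > b, a == b, a < b on dotted versions."""
    pa = [int(x) for x in a.split(".")]
    pb = [int(x) for x in b.split(".")]
    return (pa > pb) - (pa < pb)

def mismatched_modules(code_versions: dict, db_versions: dict) -> list:
    """List (module, db_version, code_version) where the two sides differ."""
    return [
        (module, db_versions.get(module, "none"), code_v)
        for module, code_v in code_versions.items()
        if db_versions.get(module) != code_v
    ]
```

Every line in the dump above is one such mismatch, e.g. `Magento_Theme` at 2.0.1 in code against 2.0.0 in the database, which is why the fix is simply to run `bin/magento setup:upgrade` after deploying the code.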

Upgrading Drupal 7.43 to 8 error: Source database is Drupal version 8 but version 7 was selected

I am new to Drupal, but I am trying to upgrade a site from version 7.43 to 8, following these steps:

After installing the Migrate modules and defining the source site, I have the following error message:

Source database is Drupal version 8 but version 7 was selected.

In my old site’s database, the drupal_version table is empty.

Any help?
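This error usually means the migration’s source connection is pointing at the wrong database. A migration tool guesses the source site’s core version from the tables present in its database; to my understanding, Drupal 7 records module schema versions in a `system` table while Drupal 8 moved that bookkeeping into `key_value`, and neither release has a `drupal_version` table, so that table being empty is not meaningful. A rough sketch of such a detector (table names are my assumption about Drupal’s storage, not code from the Migrate module):

```python
from typing import Optional

# Rough sketch: guess a Drupal site's core version from which bookkeeping
# tables exist in its database. Drupal 8 stores module schema versions in
# `key_value`; Drupal 6/7 stored them in `system` (my understanding of
# Drupal's storage, simplified for illustration).
def guess_drupal_version(table_names: set) -> Optional[int]:
    if "key_value" in table_names:   # Drupal 8-style storage
        return 8
    if "system" in table_names:      # Drupal 6/7-style storage
        return 7
    return None                      # not recognisably a Drupal schema
```

If the source database settings point at the new (already installed) Drupal 8 database instead of the old 7.43 one, a detector like this sees `key_value` and reports version 8, which would produce exactly the error above. Double-check which database the source connection actually reaches.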

Should I use Entity Framework for CRUD and let the database handle the complexity that comes with high-end queries?

I am new to EF and I like it, since it reduces the overhead of writing common queries, replacing them with simple Add/Remove functions.

Today I got into an argument with a colleague who has been using it for a while, after approaching him for advice on when to use stored procedures, when to use EF, and for what.

He replied:

Look, the simple thing is that you can use both, but what’s the point of using an ORM if you are doing it all in the database, i.e. stored procedures? So how do you figure out what to do where, and why? A simple formula I have learned: use the ORM for all CRUD operations and for queries that require 3–4 joins; for anything beyond that, you are better off with stored procedures.

I thought it over and replied:

Well, that isn’t the case. I have seen blogs and examples where people do massive things with EF and never needed to write any procedures.

But he is adamant, calling it performance overhead, which is still beyond my understanding, since I am relatively new compared to him.

So my question is: should you only handle CRUD in EF, or should you do a lot more in EF as a replacement for stored procedures?
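For what it’s worth, the trade-off your colleague describes isn’t specific to EF or .NET. Here is his rule of thumb sketched in Python with sqlite3 (purely illustrative; the names are made up): a generic helper covers CRUD the way an ORM would, while the reporting query is hand-written SQL, the kind of logic he would push into a stored procedure.

```python
import sqlite3

# Sketch of the CRUD-vs-complex-query split, outside .NET. Simple writes go
# through a thin helper (what an ORM gives you); the aggregate report is
# hand-written SQL (what the colleague would put in a stored procedure).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")

def add_order(customer: str, total: float) -> int:
    """CRUD-style insert, the kind of call an ORM generates for you."""
    cur = conn.execute(
        "INSERT INTO orders (customer, total) VALUES (?, ?)", (customer, total))
    conn.commit()
    return cur.lastrowid

def totals_by_customer() -> list:
    """A reporting query written by hand, outside any ORM abstraction."""
    return conn.execute(
        "SELECT customer, SUM(total) FROM orders GROUP BY customer ORDER BY customer"
    ).fetchall()
```

The practical answer tends to be that modern ORMs handle far more than CRUD; hand-written SQL or procedures earn their keep only where the generated query is demonstrably a bottleneck, which you find by measuring rather than by a fixed join-count rule.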

Managing multiple dynamic database connections

I’m working on a server to which you can pass some form of authentication as input (like a connection string), and it will connect you to your database. So the DB connection is going to be dynamic, and there can be multiple users at the same time, connecting to different databases.

What I’m wondering is: is there a preferred way of managing the connections? Should the DB client be stored in memory after authentication, so each user can immediately retrieve it using their session data and execute queries against it? Or should I close and reopen the connection every time the user wants to do something? I can also use JS to figure out whether the user is still active on the page or has left, and get rid of the connection object based on the user’s state.

Approach 1

  • User signs in to our web application.
  • User enters the credentials for the database (like the connection string).
  • Server authenticates against the DB, and we now have the connection client object. We keep it in a dictionary mapped to the user id.
  • User wants to run a query. We determine the user id from the request, fetch the client from memory, and run the query.
  • When the user leaves the page, we detect it through JS (the unload event), send a request or socket packet to the server, and close the client and remove it from the dictionary.
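Sketched in code (Python, with sqlite3 standing in for a real driver; the class and method names are my own), Approach 1 amounts to a registry like this. Note that it needs locking, since web servers handle requests concurrently, plus some idle-timeout fallback, because the browser’s unload signal is not guaranteed to arrive:

```python
import sqlite3
import threading

# Approach 1 as a sketch: connections opened at sign-in are kept in a
# dictionary keyed by user id and removed when the user leaves.
class ClientRegistry:
    def __init__(self):
        self._clients = {}
        self._lock = threading.Lock()

    def connect(self, user_id: str, dsn: str):
        """Open a connection at sign-in and remember it for this user."""
        conn = sqlite3.connect(dsn)   # stand-in for your real DB driver
        with self._lock:
            self._clients[user_id] = conn
        return conn

    def get(self, user_id: str):
        """Fetch the stored client for a later query request."""
        with self._lock:
            return self._clients.get(user_id)

    def disconnect(self, user_id: str):
        """Close and forget the client when the user leaves."""
        with self._lock:
            conn = self._clients.pop(user_id, None)
        if conn is not None:
            conn.close()
```

The main risks are connections leaking when the unload notification never arrives, and hitting memory or server-side connection limits as the number of concurrent users grows.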

Approach 2

  • User signs in to our web application.
  • User enters the credentials for the database (like the connection string).
  • Server authenticates against the DB, and we just confirm that the connection worked. We don’t keep the client object in memory.
  • User wants to run a query. We reconnect to the database, run the query, and close the connection. No dictionaries are kept in memory; we reconnect every time the user wants to do something.
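Approach 2 in the same style is just a per-request helper, shown below with sqlite3 as a stand-in. In practice, most drivers offer connection pooling, which is the usual middle ground: you keep a small pool per distinct connection string rather than a client per user, getting Approach 2’s simplicity and lifecycle safety with most of Approach 1’s speed.

```python
import sqlite3
from contextlib import contextmanager

# Approach 2 as a sketch: open, run, close on every request. The context
# manager guarantees the connection is closed even if the query fails.
@contextmanager
def db_connection(dsn: str):
    conn = sqlite3.connect(dsn)
    try:
        yield conn
    finally:
        conn.close()

def run_query(dsn: str, sql: str, params=()) -> list:
    """Reconnect, run one query, and close, as in Approach 2."""
    with db_connection(dsn) as conn:
        return conn.execute(sql, params).fetchall()
```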

What do you think?

How to ease the pain of lack of diffs when using database migrations?

The pain that I’ve often felt when creating database migration files is best described in this Speakerdeck: Sane Database Change Management with Sqitch.

  • Paste entire function to new “up” script
  • Edit the new file
  • Copy the function to the new “down” script
  • Three copies of the function!

And I end up with no clear diff of the function that I can easily git-blame to understand the change later in time.

I too feel that a sync-based approach to schema migrations is much better.

I am starting a new greenfield project (as the only developer). The stack I’ve chosen is Postgres (on AWS RDS), Node.js, Express, Apollo (GraphQL), and React (on Heroku).

I have read about sqitch and migra, but never used them. Are they the cure for the pain I’ve felt? Are they compatible with the stack I’m using for this new project, and if not, what other sync-based migration tool is best suited to it?

My current workflow is like this. Dev and production database models are the same. A new story arises. An existing database function needs to be altered. I create a new migration file. I copy the function that needs to be altered into the “up” part and again into the “down” part of the new migration. I commit. I alter the “up” part. I commit (which creates the diff).

This all feels very verbose when only a few lines in the function needed to change. Ideally, I would have the whole schema as code in git. I alter the schema. I commit (creating a diff in the git history). A tool then generates the required SQL statements and makes a new up and down migration file for me. I don’t mind having to verify the generated migration file.
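That ideal workflow is essentially what diff-based tools advertise. As a toy illustration of the idea (not how migra is implemented; real tools diff the live database catalog, not Python dicts), deriving the migration from desired-vs-current state looks like this:

```python
# Toy sketch of a diff-based migration: the schema lives as code, and the
# migration statements are *derived* by comparing desired state with
# current state, instead of being hand-written up/down scripts.
def diff_functions(current: dict, desired: dict) -> list:
    """Each dict maps function name -> full CREATE OR REPLACE body."""
    statements = []
    for name, body in desired.items():
        if current.get(name) != body:
            statements.append(body)                      # new or changed
    for name in current:
        if name not in desired:
            statements.append(f"DROP FUNCTION {name};")  # removed
    return statements
```

With this shape of tool, git history holds exactly one copy of each function, so `git blame` on the schema file shows the real change, and the generated statements are the migration.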

Oracle Database 12c R2 installation, directory permissions (Ubuntu 16.04)

I’m trying to install Oracle Database 12c R2 on Ubuntu 16.04, but I have failed.

When choosing the directories where the Oracle base, the software, and the database files should be located, I get the error INS-32012 (check your directory permissions).

I followed an install guide up to the point where the GUI appears. From there my configuration differs, because I use the Desktop Class option. Nevertheless, I also tried the Server Class, and it didn’t work either.

Could somebody tell me if there is something missing in this guide? How can I solve this problem?


Exchange Server – (permanently) delete mails from database

We have a problem with very large Outlook OST files. What we do is archive the mails from the Exchange server with third-party software and delete them from the Exchange server afterwards.

Nonetheless, the OST file size stays the same.

Do I really need to put the server into maintenance mode (I have a TechNet article that details this in some 20 steps), or is there a faster method to accomplish the task?

Select column values from all the tables of an Oracle database

I am trying to fetch a column’s value from all the tables of a database where the column value matches.

Table A          Table B
CAMPUS_ID        CAMPUS_ID
1                1

Expected Result

TABLE   VALUE
A       1
B       1



The query fails with an “invalid identifier” error (ORA-00904).

The inner query returns the correct tables.
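The usual shape of a solution is: query the data dictionary for the tables that actually contain the column, then run one query per table. A likely cause of the invalid-identifier error is building a dynamic query against a table that doesn’t have the column. Here is the pattern sketched with Python and sqlite3 (an assumption-laden stand-in: in Oracle you would read `ALL_TAB_COLUMNS` instead of `sqlite_master`/`PRAGMA`, and drive the per-table queries from PL/SQL or XMLTABLE-style dynamic SQL):

```python
import sqlite3

# General pattern behind "find a value across all tables": read the data
# dictionary first, then query only the tables that really have the column,
# which is also what avoids "invalid identifier" on tables lacking it.
def find_value(conn, column: str, value) -> list:
    hits = []
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    for table in tables:
        cols = [r[1] for r in conn.execute(f"PRAGMA table_info({table})")]
        if column in cols:
            for (v,) in conn.execute(
                    f'SELECT "{column}" FROM "{table}" WHERE "{column}" = ?',
                    (value,)):
                hits.append((table, v))
    return hits
```

Applied to the example above, searching for `CAMPUS_ID = 1` would report a hit from table A and a hit from table B, matching the expected result.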