How to ./configure a build for a Raspberry Pi?

I want to use this guide to set up a headless Bitcoin Core full node and came across the line:

    ./configure CPPFLAGS="-I/usr/local/BerkeleyDB.4.8/include -O2" LDFLAGS="-L/usr/local/BerkeleyDB.4.8/lib" --disable-wallet

Unfortunately, the blog does not explain why the author uses these options.

First of all, I want to use my node with a wallet, so I probably should not use that option. Second, I want to configure it to be useful on a Raspberry Pi, which has only 1 GB of RAM and also runs other software, for example my Logitech Media Server for audio streaming.

I know that I can use a config file for running bitcoind, with entries for an upload limit and other settings. Do I also need to run the configure script with options, or are both setups interchangeable?
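To illustrate what I mean by the config file, here is a minimal sketch of a bitcoin.conf for a low-RAM device; the values are placeholders I'd still have to tune, not recommendations:

    # ~/.bitcoin/bitcoin.conf (illustrative values only)
    dbcache=100            # UTXO cache size in MiB (smaller saves RAM)
    maxconnections=16      # fewer peers, less memory and bandwidth
    maxuploadtarget=5000   # cap upload traffic at roughly 5000 MiB per day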

Also, I get the error "Found Berkeley DB other than 4.8" when running ./configure. It would also be important to know whether I can have a wallet with a BDB version other than 4.8, or whether I have to install 4.8 (and how).
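If I read the configure output correctly, there is also a flag for accepting a BDB other than 4.8, though I'm unsure what it implies for wallet portability:

    # sketch: build against the system BerkeleyDB instead of 4.8; configure
    # warns that the resulting wallet.dat may not be portable
    ./configure --with-incompatible-bdb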

How to configure Xcode external build system to build and clean using standard shortcuts?

[Xcode 10.1, macOS 10.14.1]

I have a project that uses bmake (it could be any make, though) and the Makefile provides a number of targets. I would like to use Xcode to build the host target and to clean the build folder, but I'm having trouble working out how to configure Xcode to let me do this.

From the command line, I would build using bmake host and clean using bmake clean. The reason I'm using Xcode for this is that I like to use an IDE for debugging.

In Project -> Info (External Build Tool Configuration), I have:

    Build Tool  :  /usr/local/bin/bmake
    Arguments   :  host
    Directory   :  None     <- I'm using the current path

With these settings, Product -> Build builds my target, but Product -> Clean Build Folder does nothing even though Xcode reports that the clean succeeded.

In order to actually do a clean, I either need to define another target with the Arguments field set to clean and then switch between targets when building/cleaning, or use a single target and change the Arguments field depending on whether I'm building or cleaning. (A really clumsy way of going about it.)

If I leave Arguments at its default value $(ACTION), all targets get built (except clean), and cleaning does nothing useful.
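My understanding is that Xcode substitutes the current action for $(ACTION): empty for a build and clean for a clean, so an empty action makes bmake run the default target. If that's right, arranging the Makefile so that the default target is host should make $(ACTION) usable again; a sketch (.MAIN is BSD make's way of naming the default target, and recipe lines must be tab-indented):

    # make `host` the default target, so `bmake` with no arguments builds it
    .MAIN: host

    host:
    	# ... actual build commands ...

    clean:
    	rm -rf build/

(The build/ path is a placeholder for whatever my real clean rule removes.)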

I’ve read https://stackoverflow.com/questions/15652316/setup-xcode-for-using-external-compiler but that question does not address this problem.

Is there a better way of doing this?

How to properly configure multicast message redistribution across an Artemis cluster

I’m using Artemis 2.8.0.

I've started two standalone servers in symmetric cluster mode and deployed an address of type 'multicast' on both of them; I've also created a couple of predefined queues attached to this address. When I wrote messages to the address on the first server, they were successfully written to all queues attached to the address. After that I connected to the second server and created a consumer for one of the queues, but the messages from the first server were not redistributed to the second.

I can't tell whether this is expected behaviour or not.

I also tried connecting the consumer by FQQN, but the result was the same. The documentation doesn't give any specific information about 'multicast' redistribution.
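For reference, this is roughly how I attached the FQQN consumer (a sketch using the JMS client; the URL, credentials, and timeout are illustrative):

    import javax.jms.*;
    import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

    public class FqqnConsumerSketch {
        public static void main(String[] args) throws Exception {
            // connect to the second server (URL is illustrative)
            ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61716");
            try (Connection conn = cf.createConnection("artemis", "artemis")) {
                Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                // FQQN address::queue targets one specific queue on the multicast address
                Queue queue = session.createQueue("k24.payment::k24.payment.bossbi");
                MessageConsumer consumer = session.createConsumer(queue);
                conn.start();
                // a null here is exactly the problem: nothing gets redistributed
                Message msg = consumer.receive(5000);
                System.out.println("received: " + msg);
            }
        }
    }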

My broker.xml looks like this:

    <?xml version='1.0'?>
    <configuration xmlns="urn:activemq"
                   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                   xmlns:xi="http://www.w3.org/2001/XInclude"
                   xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
       <core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="urn:activemq:core ">
          <name>server1</name>
          <cluster-user>artemis</cluster-user>
          <cluster-password>artemis</cluster-password>
          <persistence-enabled>true</persistence-enabled>
          <journal-type>ASYNCIO</journal-type>
          <paging-directory>data/paging</paging-directory>
          <bindings-directory>data/bindings</bindings-directory>
          <journal-directory>data/journal</journal-directory>
          <large-messages-directory>data/large-messages</large-messages-directory>
          <journal-datasync>true</journal-datasync>
          <journal-min-files>2</journal-min-files>
          <journal-pool-files>10</journal-pool-files>
          <journal-file-size>10M</journal-file-size>
          <journal-buffer-timeout>20000</journal-buffer-timeout>
          <journal-max-io>4096</journal-max-io>
          <disk-scan-period>5000</disk-scan-period>
          <max-disk-usage>90</max-disk-usage>
          <critical-analyzer>true</critical-analyzer>
          <critical-analyzer-timeout>120000</critical-analyzer-timeout>
          <critical-analyzer-check-period>60000</critical-analyzer-check-period>
          <critical-analyzer-policy>HALT</critical-analyzer-policy>

          <acceptors>
             <acceptor name="artemis">tcp://0.0.0.0:61716?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor>
             <acceptor name="cluster-acceptor">tcp://0.0.0.0:61717</acceptor>
          </acceptors>

          <connectors>
             <connector name="netty-connector">tcp://localhost:61616</connector>
             <connector name="cluster-connector">tcp://localhost:61617</connector>
          </connectors>

          <cluster-connections>
             <cluster-connection name="k24-artemis-cluster">
                <address></address>
                <connector-ref>netty-connector</connector-ref>
                <check-period>5000</check-period>
                <retry-interval>500</retry-interval>
                <use-duplicate-detection>true</use-duplicate-detection>
                <message-load-balancing>ON_DEMAND</message-load-balancing>
                <max-hops>1</max-hops>
                <static-connectors>
                   <connector-ref>cluster-connector</connector-ref>
                </static-connectors>
             </cluster-connection>
          </cluster-connections>

          <security-settings>
             <security-setting match="#">
                <permission type="createNonDurableQueue" roles="amq"/>
                <permission type="deleteNonDurableQueue" roles="amq"/>
                <permission type="createDurableQueue" roles="amq"/>
                <permission type="deleteDurableQueue" roles="amq"/>
                <permission type="createAddress" roles="amq"/>
                <permission type="deleteAddress" roles="amq"/>
                <permission type="consume" roles="amq"/>
                <permission type="browse" roles="amq"/>
                <permission type="send" roles="amq"/>
                <!-- we need this otherwise ./artemis data imp wouldn't work -->
                <permission type="manage" roles="amq"/>
             </security-setting>
          </security-settings>

          <address-settings>
             <!-- if you define auto-create on certain queues, management has to be auto-create -->
             <address-setting match="activemq.management#">
                <dead-letter-address>DLQ</dead-letter-address>
                <expiry-address>ExpiryQueue</expiry-address>
                <redelivery-delay>0</redelivery-delay>
                <!-- with -1 only the global-max-size is in use for limiting -->
                <max-size-bytes>-1</max-size-bytes>
                <message-counter-history-day-limit>10</message-counter-history-day-limit>
                <address-full-policy>PAGE</address-full-policy>
                <auto-create-queues>true</auto-create-queues>
                <auto-create-addresses>true</auto-create-addresses>
                <auto-create-jms-queues>true</auto-create-jms-queues>
                <auto-create-jms-topics>true</auto-create-jms-topics>
             </address-setting>
             <!-- default for catch all -->
             <address-setting match="#">
                <dead-letter-address>DLQ</dead-letter-address>
                <expiry-address>ExpiryQueue</expiry-address>
                <redelivery-delay>0</redelivery-delay>
                <max-size-bytes>-1</max-size-bytes>
                <message-counter-history-day-limit>10</message-counter-history-day-limit>
                <address-full-policy>PAGE</address-full-policy>
                <auto-create-queues>true</auto-create-queues>
                <auto-create-addresses>true</auto-create-addresses>
                <auto-create-jms-queues>true</auto-create-jms-queues>
                <auto-create-jms-topics>true</auto-create-jms-topics>
             </address-setting>
             <address-setting match="k24.#">
                <redistribution-delay>0</redistribution-delay>
                <max-delivery-attempts>100</max-delivery-attempts>
                <redelivery-delay-multiplier>1.5</redelivery-delay-multiplier>
                <redelivery-delay>5000</redelivery-delay>
                <max-redelivery-delay>50000</max-redelivery-delay>
                <send-to-dla-on-no-route>true</send-to-dla-on-no-route>
                <auto-create-addresses>true</auto-create-addresses>
                <auto-delete-addresses>true</auto-delete-addresses>
                <auto-create-queues>true</auto-create-queues>
                <auto-delete-queues>true</auto-delete-queues>
                <default-purge-on-no-consumers>false</default-purge-on-no-consumers>
                <max-size-bytes>104857600</max-size-bytes><!-- 100 MB -->
                <page-size-bytes>20971520</page-size-bytes><!-- 20 MB -->
                <address-full-policy>PAGE</address-full-policy>
             </address-setting>
          </address-settings>

          <addresses>
             <address name="DLQ">
                <anycast>
                   <queue name="DLQ" />
                </anycast>
             </address>
             <address name="ExpiryQueue">
                <anycast>
                   <queue name="ExpiryQueue" />
                </anycast>
             </address>
             <address name="k24.payment">
                <multicast>
                   <queue name="k24.payment.bossbi">
                      <durable>true</durable>
                   </queue>
                   <queue name="k24.payment.other">
                      <durable>true</durable>
                   </queue>
                </multicast>
             </address>
          </addresses>
       </core>
    </configuration>

My assumption is that Artemis should redistribute messages from all queues attached to a multicast address from the first server to the second when there are consumers on the second server.

How do you configure a dynamic block for a customer segment in M2C?

Using Magento Commerce (nee EE) 2.3.1, how do you configure a Dynamic Block to render for a specific customer segment?

The stock Magento documentation:

https://docs.magento.com/m2/ee/user_guide/cms/dynamic-blocks.html

does not give you much to work with if it’s not working.

I have a Customer Segment set up for any customer that has placed an order (which matches one customer record, using the sample data). I have a Cart Rule for 20% off, and I want to show a Dynamic Block to that customer segment to promote the Cart Rule.

I have the Dynamic Block set up and dropped into a CMS page (screenshots omitted), but when I log in as the customer in the customer segment, no promo Dynamic Block appears on the CMS page.

I’ve reindexed and cleared the cache.
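For completeness, this is roughly what I ran from the Magento root to reindex and clear the cache:

    bin/magento indexer:reindex
    bin/magento cache:flush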

How to configure ELK on one server and Filebeat on another server?

I have two servers running Ubuntu 18.04:

  • monitoring.example.com with ELK
  • www.example.com with my site in production and Filebeat

Here is the configuration of the ELK server:

https://docs.google.com/document/d/15B5m3fsjoWTe1F4ZnurpMo-mJ_nRBGbZKcxCsRRcE6o/edit

https://pastebin.com/Bnz0bbMr

Here is the Filebeat server configuration:

https://docs.google.com/document/d/1uP4m5PBKiO2VD5oskJKYQ4OlKg24SICbBxjfhicVXx4/edit

https://pastebin.com/C2cz6RVa
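In case the links above go stale: the two pieces that tie the servers together are the Beats input on the ELK server and the Logstash output in filebeat.yml. Mine look roughly like this (the file names and the 5044 port follow the usual tutorial layout):

    # /etc/logstash/conf.d/02-beats-input.conf on monitoring.example.com
    input {
      beats {
        port => 5044
      }
    }

    # /etc/filebeat/filebeat.yml on www.example.com
    output.logstash:
      hosts: ["monitoring.example.com:5044"]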

Here is the result when I test the port:

https://pastebin.com/JyuKWWCp

How to configure ELK on one server and Filebeat on another server?

Setting PKG_CONFIG_PATH for the gegl library still didn't allow configure to proceed

I'm installing GIMP 2.8 (I need that version in order to use a plugin not available with GIMP 2.10). There is one last library needed; it asks for gegl-0.2.

My system has gegl-0.3, so I set up pkg-config to look in the folder where the gegl-0.3.pc file is.

    ~/Downloads/gimp-2.8.22$ echo $PKG_CONFIG_PATH
    /usr/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu/:/usr/lib/x86_64-linux-gnu/gegl-0.3/:/usr/lib/x86_64-linux-gnu/pkgconfig:/usr/lib/x86_64-linux-gnu/pkgconfig/

I made a couple of mistakes, but that list should cover it, I think. However, running ./configure still ends with the error: No package 'gegl-0.2' found.
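As far as I understand, pkg-config matches the requested module against the name of a .pc file on the search path, not against the directory name, which is how I've been checking (the version shown is illustrative):

    # pkg-config needs a file literally named gegl-0.2.pc somewhere on
    # PKG_CONFIG_PATH; a gegl-0.3.pc file does not satisfy gegl-0.2
    $ pkg-config --exists gegl-0.2; echo $?
    1
    $ pkg-config --modversion gegl-0.3
    0.3.8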

What is wrong here? Does it actually need the gegl-0.2 version?

Linux multipath: How to configure a single multibus path group

I have just upgraded a functional Ubuntu 16.04 host to 18.04 and am now having issues with multipath.

Package versions:

  • multipath-tools 0.7.4-2ubuntu3
  • open-iscsi 2.0.874-5ubuntu2.7

I have a Dell PowerVault MD3860i with four paths to the host. Before the upgrade, multipath -ll looked like this:

    backupeng (3600a098000b5efae00000e9a5b9b58f5) dm-2 DELL,MD38xxi
    size=8.0T features='0' hwhandler='0' wp=rw
    `-+- policy='round-robin 0' prio=1 status=active
      |- 3:0:0:1 sdb 8:16 active ready running
      |- 4:0:0:1 sdc 8:32 active ready running
      |- 5:0:0:1 sdd 8:48 active ready running
      `- 6:0:0:1 sde 8:64 active ready running

Now it looks like this:

    backupeng (3600a098000b5efae00000e9a5b9b58f5) dm-2 DELL,MD38xxi
    size=8.0T features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
    |-+- policy='round-robin 0' prio=14 status=active
    | |- 5:0:0:1 sdd 8:48 active ready running
    | `- 6:0:0:1 sde 8:64 active ready running
    `-+- policy='round-robin 0' prio=9 status=enabled
      |- 3:0:0:1 sdb 8:16 active ready running
      `- 4:0:0:1 sdc 8:32 active ready running

My /etc/multipath.conf looks like this:

    defaults {
        user_friendly_names yes
        path_selector "round-robin 0"
        path_grouping_policy multibus
    }

    multipaths {
        multipath {
            wwid 3600a098000b5efae00000e9a5b9b58f5
            alias backupeng
        }
    }

For performance reasons, I need all paths in the same path group, as they were before. My understanding is that path_grouping_policy multibus is supposed to do this. I have spent the past few hours trying things: restarting multipathd, setting up the iSCSI and multipath configs on the host from scratch, and so on.
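One thing I'm considering is forcing the policy per device rather than in defaults, in case a built-in hardware table entry for the MD38xxi overrides my defaults section; an untested sketch:

    multipaths {
        multipath {
            wwid 3600a098000b5efae00000e9a5b9b58f5
            alias backupeng
            # assumption: per-multipath settings take precedence over the
            # built-in device defaults shipped with multipath-tools
            path_grouping_policy multibus
            path_selector "round-robin 0"
        }
    }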

I can paste the full output of show config from multipathd -k, but what I'm seeing in there agrees with my multipath.conf file. Is there any other information I can provide?

External keyboard doesn’t show as separate keyboard to configure

MacBook Pro 14,1 (13″, 2017, two Thunderbolt ports)

At work I have a Dell USB-C dock that my external keyboard connects to (the display is connected differently). There, like everywhere else I've seen this, when I connect an external (normal PC) keyboard, macOS asks me to identify it ("press the key next to Shift"), and after that I can configure the external keyboard separately from the internal one (I like to switch Alt+Cmd on the PC layout).

I have a new LG screen (27UK850-W) at home that connects to the MBP via USB-C. It charges the laptop and carries the display signal over that connection. The screen also has two USB-A ports; they work (I connect a mouse and my usual external keyboard), but the keyboard doesn't show up as a "separate" device: if I remap the keys (Alt+Cmd in my case), the remapping also applies to the built-in keyboard.

Any ideas how I can work around this?

How to configure an SVI on a Linux machine?

I want to configure an SVI (switched virtual interface) on a Linux machine. As far as I understand, with an SVI we can communicate between VLANs.

I want to implement a simple use case for this.

Let's say I have two VLANs, with IDs 100 and 200. I added two interfaces to each VLAN:

vlan100: eth1(10.10.0.10), eth2(10.10.0.20)

vlan200: eth3(20.20.0.30), eth4(20.20.0.40)

What should the next steps be in order to achieve communication between the VLANs, so that, say, a ping from eth3 to eth1 (ping -I eth3 10.10.0.10) works?
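For concreteness, here is the kind of setup I have in mind based on my reading so far; the bridge name, the .1 gateway addresses, and the /24 masks are my own assumptions:

    # one VLAN-aware bridge; eth1/eth2 untagged in VLAN 100, eth3/eth4 in VLAN 200
    ip link add br0 type bridge vlan_filtering 1
    ip link set br0 up
    for dev in eth1 eth2; do
        ip link set "$dev" master br0
        bridge vlan add dev "$dev" vid 100 pvid untagged
    done
    for dev in eth3 eth4; do
        ip link set "$dev" master br0
        bridge vlan add dev "$dev" vid 200 pvid untagged
    done

    # SVIs: VLAN subinterfaces on the bridge itself, one gateway per VLAN
    bridge vlan add dev br0 vid 100 self
    bridge vlan add dev br0 vid 200 self
    ip link add link br0 name br0.100 type vlan id 100
    ip link add link br0 name br0.200 type vlan id 200
    ip addr add 10.10.0.1/24 dev br0.100
    ip addr add 20.20.0.1/24 dev br0.200
    ip link set br0.100 up && ip link set br0.200 up

    # let the kernel route between the two VLAN subnets
    sysctl -w net.ipv4.ip_forward=1

Is my understanding correct that, with something like this, the hosts behind eth1..eth4 would then need 10.10.0.1 or 20.20.0.1 as their gateway for the cross-VLAN ping to work?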