Processor type differences between ElastiCache instance types: will all instances suffer the same way?

This may sound like yet another "what is a vCPU" or "should I use ElastiCache? Remember, it's single-threaded!" question, but I think it asks something that hasn't been answered.

Redis is single-threaded, which means only one core will ever be used.

ElastiCache is a managed service for Redis. AWS offers multiple instance types for ElastiCache with varying memory sizes and vCPU counts.

AWS is aware that even though an instance has more than one vCPU, performance is capped at just one of those vCPUs.

So here's my question: do all instance types suffer the same performance loss? Naturally, if one computer has a 2 GHz processor and another has a latest-generation 3 GHz processor, the latter will perform better.

However, AWS's vCPUs are a mystery: neither the processor model nor the clock speed is exposed. Naturally, the $0.729/hour cache.r3.8xlarge should perform better than the $0.046/hour cache.r4.large. But despite one having 2 vCPUs and the other having many, many more (the site doesn't list the number), both would only ever use 1 vCPU, which, logically, would suggest the same performance.

I find it hard to believe that a processor backing 32+ GB of memory would perform the same as a processor backing 2 GB. At the higher price points, I have to believe the processors are a much higher grade than at the entry-level price points.

When sizing an existing instance for a move to ElastiCache, it seems one could end up matched with an underpowered, inappropriate cache even though its memory exceeds the requirements. Or, on the contrary, vastly overpay for an unnecessarily large cache that likewise meets the memory requirements.

Is the CPU performance truly the same (somehow)? If it is, then the steep price of a large cache buys only memory and network capacity, with CPU out of the picture entirely.
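For what it's worth, if two node types really did differ only in memory, a single-connection benchmark should show it: `redis-benchmark` ships with Redis and exercises exactly the one core in question. A sketch of what I would run against each candidate node type (the endpoint is a hypothetical placeholder, not a real cluster):

```shell
# Hypothetical ElastiCache endpoint; substitute your own cluster's address.
ENDPOINT="my-cache.abc123.use1.cache.amazonaws.com"

# Single connection (-c 1), quiet output (-q): 100k SET/GET ops, so the
# numbers reflect per-core speed rather than parallelism.
cmd="redis-benchmark -h $ENDPOINT -p 6379 -t set,get -n 100000 -c 1 -q"
echo "Run against each node type and compare: $cmd"
```

Comparing the requests-per-second figures across node types would answer empirically whether the underlying cores differ.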

Feedback on annotation processor for Kotlin [on hold]

I recently finished writing my first annotation processor for Kotlin (and my first one in general :P). Its purpose is to reduce the boilerplate of converting one data class into another that shares most of its fields.

This project was curiosity driven, and I only have one use case for it. But now I’m wondering if it could be useful (or made useful) for anyone else.

Here is how you would use it.

    package com.example

    import com.ltrojanowski.morph.api.Morph

    // target
    @Morph(from = [Boo::class, Baz::class, Bar::class])
    data class Foo(val b: Double, val a: String, val c: Int, val d: Float, val e: List<String>)

    @Morph(from = [Boo::class])
    data class Fiz(val b: Double, val a: String, val c: Int, val d: Float, val e: List<String>)

    // sources
    data class Boo(val a: String, val b: Double, val c: Int, val d: Float, val e: List<String>)
    data class Baz(val a: String, val b: Double, val c: Int, val d: Float)
    data class Bar(val a: String?, val b: Double?, val c: Int, val d: Float, val e: List<String>)

    fun main(args: Array<String>) {

        val boo = Boo("a", 1.0, 2, 3.0f, listOf("from boo"))
        val baz = Baz("a", 1.0, 2, 3.0f)
        val bar = Bar(null, null, 2, 3.0f, listOf("from bar"))

        val fooBuilder: FooMorphBuilder.() -> Unit = {}
        val foo1 = boo.into<Foo>(fooBuilder).morph()
        assert(foo1.a == boo.a)
        assert(foo1.b == boo.b)
        assert(foo1.c == boo.c)
        assert(foo1.d == boo.d)
        assert(foo1.e == boo.e)

        val foo2 = baz.into<Foo> {
            e = listOf("inserted manually")
        }.morph()
        assert(foo2.a == boo.a)
        assert(foo2.b == boo.b)
        assert(foo2.c == boo.c)
        assert(foo2.d == boo.d)
        assert(foo2.e == listOf("inserted manually"))

        val foo3 = bar.into<Foo> {
            a = a ?: "A if null"
            b = b ?: 0.0
        }.morph()
        assert(foo3.a == "A if null")
        assert(foo3.b == 0.0)
        assert(foo3.c == bar.c)
        assert(foo3.d == bar.d)
        assert(foo3.e == bar.e)

        val fizBuilder: FizMorphBuilder.() -> Unit = {}
        val fiz = boo.into<Fiz>(fizBuilder).morph()
        assert(fiz.a == boo.a)
        assert(fiz.b == boo.b)
        assert(fiz.c == boo.c)
        assert(fiz.d == boo.d)
        assert(fiz.e == boo.e)
    }

The project is hosted on GitHub. The easiest way to add it to your project is probably via jitpack.io. In your repositories add:

    repositories {
        mavenCentral()
        maven { url 'https://jitpack.io' }
    }

And in your dependencies:

    dependencies {
        implementation "org.jetbrains.kotlin:kotlin-stdlib-jdk8"
        implementation "com.github.ltrojanowski.morph:morph-api:-SNAPSHOT"
        kapt "com.github.ltrojanowski.morph:morph-compiler:-SNAPSHOT"
    }

Besides feedback on usability and on making it more broadly useful, please point out issues with the code. I am a beginner with both annotation processors and Kotlin.

Intel® Core™ i5-7360U Processor bottleneck for an RTX 2060 GPU

I am planning to buy an RTX 2060 GPU, primarily for deep learning. I have a MacBook Pro 13-inch (mid-2017, Retina display), which has an Intel® Core™ i5-7360U processor. I have read in some places on the Internet that a slower CPU can bottleneck the newer RTX GPUs. Is this true, and if so, would the older-generation GTX 1060 be okay with this CPU?

Android apps for “armeabi-v7a” and “x86” architecture: SoC vs. Processor vs. ABI

While downloading Android apps, I have sometimes seen separate builds for the armeabi-v7a and x86 architectures.

I read some references on armeabi-v7a and x86. However, in the end I couldn't determine which mobile processors and architectures belong to armeabi-v7a and which belong to x86.

As far as I know, the mobile processors commonly used in Android devices are Snapdragon (by Qualcomm), MediaTek, Exynos (by Samsung) and Kirin (by Huawei). Almost every brand publishes a phone's specifications, and almost all of them state whether the processor is 64-bit. Should I conclude that 64-bit mobile processors (Snapdragon, MediaTek, Exynos or Kirin) belong to the ARM architecture?

EDIT:
To understand which SoCs support armeabi-v7a Android APKs and which support x86 APKs, I went through the specifications of the MediaTek Helio X30 and the Snapdragon 855. The Helio X30 specification says it has a dual-core ARM Cortex-A73 and a quad-core ARM Cortex-A53, but ARM is not mentioned anywhere in the Snapdragon 855 specification. Should I conclude that the Helio X30 will support armeabi-v7a Android apps and the Snapdragon 855 will not?
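Rather than reading spec sheets, one practical check is to ask a device directly: the `ro.product.cpu.abilist` system property (readable via `adb shell getprop ro.product.cpu.abilist`) lists every ABI the device accepts. A sketch of interpreting that list; the sample value below is hypothetical, typical of a 64-bit ARM phone:

```shell
# On a real device you would capture this with:
#   ABILIST=$(adb shell getprop ro.product.cpu.abilist)
# Hypothetical answer from a 64-bit ARM (e.g. Snapdragon) phone:
ABILIST="arm64-v8a,armeabi-v7a,armeabi"

# A 64-bit ARM device normally also lists the 32-bit ARM ABIs,
# so it can run armeabi-v7a APKs.
if printf '%s' "$ABILIST" | grep -q "armeabi-v7a"; then
    echo "this device can run armeabi-v7a apks"
fi

# x86 APKs only run if an x86 ABI appears in the list
# (i.e. on Intel/AMD-based devices, or via binary translation).
if printf '%s' "$ABILIST" | grep -q "x86"; then
    echo "this device can run x86 apks"
else
    echo "this device cannot run x86 apks"
fi
```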

Please clarify my confusion.

Ubuntu with Ryzen 2500U: processor speed capped at 2 GHz

My processor does not run at its full clock speed on Ubuntu. I also tried Arch-based distros like Manjaro and hit the same issue. Below are the details of my system and what I have tried so far.

I installed Ubuntu 18.04 (kernel 4.15.0-43-generic) alongside Windows 10 Home on an Acer Swift 3 SF315-41.

The max CPU speed and other basic details are as follows:

    Architecture:        x86_64
    CPU op-mode(s):      32-bit, 64-bit
    Byte Order:          Little Endian
    CPU(s):              8
    On-line CPU(s) list: 0-7
    Thread(s) per core:  2
    Core(s) per socket:  4
    Socket(s):           1
    NUMA node(s):        1
    Vendor ID:           AuthenticAMD
    CPU family:          23
    Model:               17
    Model name:          AMD Ryzen 5 2500U with Radeon Vega Mobile Gfx
    Stepping:            0
    CPU MHz:             1574.846
    CPU max MHz:         2000.0000
    CPU min MHz:         1600.0000
    BogoMIPS:            3992.66
    Virtualization:      AMD-V
    L1d cache:           32K
    L1i cache:           64K
    L2 cache:            512K
    L3 cache:            4096K
    NUMA node0 CPU(s):   0-7
    Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx hw_pstate sme ssbd ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 xsaves clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif overflow_recov succor smca

However, in Windows the max processor speed is 3.6 GHz.

This max speed is also visible in Ubuntu:

    $ dmidecode -t processor | grep Speed
        Max Speed: 3600 MHz
        Current Speed: 2000 MHz

In my research I found that the legacy bootloader should be enabled and EFI disabled for the turbo frequencies to work normally. But my laptop manufacturer does not allow that, and the machine has a primary Windows 10 OS.

Also, is there any chance that the AMD microcode is not loaded during init? If so, how can I solve this?
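In case it helps with diagnosis: the cpufreq driver, the active governor, and the maximum frequency the kernel believes in can all be read from sysfs before touching any firmware settings. A sketch, assuming the standard Linux cpufreq paths (which may be absent if no cpufreq driver loaded):

```shell
# Report the kernel's view of cpu0's frequency scaling.
sys=/sys/devices/system/cpu/cpu0/cpufreq
report=""
for f in scaling_driver scaling_governor cpuinfo_max_freq; do
    if [ -r "$sys/$f" ]; then
        report="$report$f: $(cat "$sys/$f")\n"
    else
        report="$report$f: not available\n"
    fi
done
printf '%b' "$report"
```

If `cpuinfo_max_freq` reports 2000000 (kHz), the kernel itself believes 2 GHz is the ceiling, which points at the ACPI tables or driver rather than the governor. The microcode revision the kernel loaded can be checked with `grep microcode /proc/cpuinfo`.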

Any approach to the problem, or an alternate solution, would be helpful.

Need help vetting for best payment processor

Hi,

I'm new to the merchant payment processing world and I'd like to become an agent/reseller. I've been introduced to several solutions, some of which want me to pay a monthly fee to bring them more merchants. What I want to know is: what's the best way to find (really important) and vet (most important) these companies, so I do right by the merchants I try to sign up? Are there review sites for this type of thing? I'm also looking for a company willing to get me up to speed to bring…
