Rendering an ID3D11Texture2D into an SkImage (SkiaSharp/Avalonia)

I’m currently trying to create an interop layer to render my render target texture into a Skia SkImage. This is being done to facilitate rendering from my graphics API into Avalonia.

I’ve managed to piece together enough code to get everything running without any errors (at least, none that I can see), but when I draw the SkImage I see nothing but a black image.

Of course, these things are easier to describe with code:

private EglPlatformOpenGlInterface _platform;
private AngleWin32EglDisplay _angleDisplay;
private readonly int[] _glTexHandle = new int[1];

IDrawingContextImpl context // <-- From Avalonia

_platform = (EglPlatformOpenGlInterface)platform;
_angleDisplay = (AngleWin32EglDisplay)_platform.Display;

IntPtr d3dDevicePtr = _angleDisplay.GetDirect3DDevice();

// Device5 is from SharpDX.
_d3dDevice = new Device5(d3dDevicePtr);

// Texture.GetSharedHandle() is the shared handle of my render target.
_eglTarget = _d3dDevice.OpenSharedResource<Texture2D>(_target.Texture.GetSharedHandle());

// WrapDirect3D11Texture calls eglCreatePbufferFromClientBuffer.
_glSurface = _angleDisplay.WrapDirect3D11Texture(_platform, _eglTarget.NativePointer);

using (_platform.PrimaryEglContext.MakeCurrent())
{
    _platform.PrimaryEglContext.GlInterface.GenTextures(1, _glTexHandle);
}

var fbInfo = new GRGlTextureInfo(GlConsts.GL_TEXTURE_2D, (uint)_glTexHandle[0], GlConsts.GL_RGBA8);
_backendTarget = new GRBackendTexture(_target.Width, _target.Height, false, fbInfo);

using (_platform.PrimaryEglContext.MakeCurrent())
{
    // Here's where we bind the gl surface to our texture object, apparently.
    _platform.PrimaryEglContext.GlInterface.BindTexture(GlConsts.GL_TEXTURE_2D, _glTexHandle[0]);
    EglBindTexImage(_angleDisplay.Handle, _glSurface.DangerousGetHandle(), EglConsts.EGL_BACK_BUFFER);
    _platform.PrimaryEglContext.GlInterface.BindTexture(GlConsts.GL_TEXTURE_2D, 0);
}

// context is a GRContext
_skiaSurface = SKImage.FromTexture(context, _backendTarget, GRSurfaceOrigin.BottomLeft, SKColorType.Rgba8888, SKAlphaType.Premul);

// This clears my render target (obviously). I should be seeing this when I draw the image, right?
_target.Clear(GorgonColor.CornFlowerBlue);

canvas.DrawImage(_skiaSurface, new SKPoint(320, 240));

So, as far as I can tell, this should be working. But as I said before, it’s only showing me a black image. It’s supposed to be cornflower blue. I’ve tried calling Flush on the ID3D11DeviceContext, but I’m still getting the black image.

Anyone have any idea what I could be doing wrong?

Rendering overlapping normal map textures to a 2d scene efficiently

I am using modern OpenGL to render a 2D non grid/tiled world map. I’ve generated some simple normal map textures to render over the base world map to provide terrain elevation/detail shading. Terrain is not tiled (triangulated from noise), so the majority of these terrain elevation features can overlap. This is good as it gives a more continuous appearance to mountains etc.

However the normal map shader needs to sample the base terrain color and apply the lighting value to it before returning the output color. So I can’t render two overlapping normal textures without the second ‘cutting out’ the first where they overlap (the second texture render cannot see the output of the first).

My solution to this was a three-pass render-to-texture via an FBO. I attempt to divide the normal textures' destination locations into three non-overlapping groups and render one group in each pass. This works to a point, but of course where a texture overlaps more than three neighbours the cut-out problem remains.

I could just increase the number of passes/groups to 4, 5, 6… and perhaps this will resolve most cut out issues. While this would probably still provide reasonable performance on my system I am guessing there is a limit where integrated graphics cards may struggle.

Is there an alternative solution for this that could scale better to lower-end systems? Or is a 5- or 6-pass full-screen render perhaps viable even on integrated graphics these days?


The first render pass draws the entire 2D scene (minus terrain shading) and is obviously the slowest, but it happens only once. Each remaining pass makes a screen-size copy of the previous pass into a second texture (rendering to a screen-sized quad) and then renders normal textures onto this copy, using the first texture as input. Adding more passes would only repeat the screen-sized texture copy and the normal-texture rendering.
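The grouping step described above is essentially greedy graph coloring: any two textures whose destination rectangles overlap must land in different passes. A minimal Python sketch of that idea (the rectangle representation and greedy strategy here are illustrative assumptions, not engine code):

```python
def overlaps(a, b):
    # Axis-aligned rectangle overlap test; rects are (x, y, w, h).
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def assign_passes(rects):
    # Greedy graph coloring: each rect gets the lowest pass index
    # not already taken by an earlier rect that overlaps it.
    groups = []
    for i, r in enumerate(rects):
        taken = {groups[j] for j in range(i) if overlaps(r, rects[j])}
        g = 0
        while g in taken:
            g += 1
        groups.append(g)
    return groups

# Two overlapping rects need two passes; an isolated rect reuses pass 0.
print(assign_passes([(0, 0, 10, 10), (5, 0, 10, 10), (20, 0, 10, 10)]))  # [0, 1, 0]
```

The number of passes needed is then `max(groups) + 1`, which makes the scaling concern concrete: it grows with the largest cluster of mutually overlapping textures, not with the total count.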

Vertical column rendering

Can someone advise me on a way to display product information (columns) in a vertical layout, like this picture:

I do not even know how to find plugins on WordPress… if someone knows one, I will take it. PS: I'm not talking about how to display products as a table list like Ninja Tables or TablePress, but only one product. Thank you.

How do I prevent the page from rendering the header seconds before the content?

Dealing with a rough issue here. I currently have no site optimization configured (that I am aware of), but on every page of the site the header loads a significant amount of time before the rest of the page content. This is causing a large CLS (Cumulative Layout Shift) issue with Google, as the page shifts when the body of the page is rendered. Is there any way I can force them to load at the same time? This only became an issue recently.

Is Unicode character encoding a safe alternative to HTML encoding when rendering unsafe user input to HTML?

I am building a web application that uses a third-party library which transforms user input into JSON and sends it to a controller action. In this action, we serialize the input using the standard Microsoft serializer from the System.Text.Json namespace.

public async Task<IActionResult> Put([FromBody] JsonElement json)
{
    string result = JsonSerializer.Serialize(json);
}

Currently, however, the JSON is rendered back to the page within a script block using @Html.Raw(), which raised an alarm with me when I reviewed the code.

While testing whether this creates an opening for script injection, I added a <script> element to the input. When serialized, the angle brackets in that input come back as Unicode escape sequences (\u003C and \u003E).

This looks fine. Rendering it to the page did not result in code execution when I tested it.

So, is Unicode character encoding really a good protection against script injection, or should I not rely on it?

Is it conceivable that the Unicode encoding is lost somewhere during processing? For example, by (de)serializing once more, etc.?
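On that round-trip concern: the `\uXXXX` escaping exists only in the serialized text, not in the decoded string value, so one extra decode/encode cycle can drop it if the second serializer does not escape `<`. A minimal Python illustration of the same JSON semantics (Python's `json` module stands in for System.Text.Json here):

```python
import json

escaped = '"\\u003Cscript\\u003E"'  # what an escaping serializer emits
value = json.loads(escaped)         # decoding restores the raw characters
print(value)                        # <script>

# Re-serializing with a serializer that does not escape '<'
# produces the dangerous literal form again.
reencoded = json.dumps(value)
print(reencoded)                    # "<script>"
```

So the escaping protects the one specific output context it was applied for; any later deserialize/serialize step has to re-apply it.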

This seems like a question that has been asked and answered before, but I couldn’t find it.

Text Rendering using FreeType library not working correctly

Currently I am implementing text rendering in my game engine using the FreeType library, following the tutorial found here: My current implementation is not working correctly; you can see the result in the following images. Note that I am rendering the text “Test test”: there are 8 distinct cube-like shapes, one for each letter, with a space between the words. The last 4 shapes are smaller and shaped differently from those of the capitalized word, so it looks at least close to rendering the string “Test test”.

[image 1: rendered result]

[image 2: rendered result]

Firstly, there are some obvious issues. To start with, you can see the “text” is drawn with a perspective projection rather than an orthographic one. This is on purpose, however, because my system already draws with a perspective projection, and if the text were stuck flat to my screen in a 2D manner, I fail to see how the projection it is drawn with would change anything.

That leads to the next problem: the “text” is not stuck to my screen in a 2D manner (like HUD elements in a game); it appears to be floating in 3D space. If I look at the “text” from exactly side-on it vanishes, so it does not appear to have any depth (z-axis), only a position on the x and y axes. Also, if I go past the side-on point and look from behind, the “text” vanishes.

And finally, the most obvious issue: the glyphs are clearly not rendered correctly, as you can't see the actual shapes of the letters, just the quad that should contain each letter.

My implementation is as follows. (Disclaimer: my engine is too big to explain every little thing that is going on, so this question shows minimal code, only what is relevant to this issue, described at quite a high level.)

In my engine the scene is created using a scene graph of GameObjects, each with GameComponents. I therefore create a TextRendererObject, add a TextRenderer component to it, and add it to the scene in the following code:

Entity *textRendererObject = new Entity(...);
TextRenderer *Text;
Text = new TextRenderer(50, 50);
Text->Load("font/arial.ttf", 240);
textRendererObject->AddComponent(Text);
AddToScene(textRendererObject);

The TextRenderer constructor, a struct that the .h file defines (used in the load function), and the load function are as follows, respectively:

TextRenderer::TextRenderer(GLuint width, GLuint height) :
    TextShader("text") // creates text shader (text.glsl)
{
    GLuint VAO, VBO;

    SetIsTextRenderer(true);

    glGenVertexArrays(1, &this->VAO);
    glGenBuffers(1, &this->VBO);
    glBindVertexArray(this->VAO);
    glBindBuffer(GL_ARRAY_BUFFER, this->VBO);
    glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat) * 6 * 4, NULL, GL_DYNAMIC_DRAW);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), 0);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindVertexArray(0);
}
/// Holds all state information relevant to a character as loaded using FreeType
struct Character {
    GLuint TextureID;   // ID handle of the glyph texture
    glm::ivec2 Size;    // Size of glyph
    glm::ivec2 Bearing; // Offset from baseline to left/top of glyph
    GLuint Advance;     // Horizontal offset to advance to next glyph
};
//...
std::map<GLchar, Character> Characters;
void TextRenderer::Load(std::string font, GLuint fontSize)
{
    // First clear the previously loaded Characters
    this->Characters.clear();
    // Then initialize and load the FreeType library
    FT_Library ft;
    if (FT_Init_FreeType(&ft)) // All functions return a value different than 0 whenever an error occurred
        printf("ERROR::FREETYPE: Could not init FreeType Library");
    // Load font as face
    FT_Face face;
    if (FT_New_Face(ft, font.c_str(), 0, &face))
        printf("ERROR::FREETYPE: Failed to load font");
    // Set size to load glyphs as
    FT_Set_Pixel_Sizes(face, 0, fontSize);
    // Disable byte-alignment restriction
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    // Then for the first 128 ASCII characters, pre-load/compile their characters and store them
    for (GLubyte c = 0; c < 128; c++)
    {
        // Load character glyph
        if (FT_Load_Char(face, c, FT_LOAD_RENDER))
        {
            printf("ERROR::FREETYPE: Failed to load Glyph");
            continue;
        }
        // Generate texture
        GLuint texture;
        glGenTextures(1, &texture);
        glBindTexture(GL_TEXTURE_2D, texture);
        glTexImage2D(
            GL_TEXTURE_2D,
            0,
            GL_RED,
            face->glyph->bitmap.width,
            face->glyph->bitmap.rows,
            0,
            GL_RED,
            GL_UNSIGNED_BYTE,
            face->glyph->bitmap.buffer
        );
        // Set texture options
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        // Now store character for later use
        Character character = {
            texture,
            glm::ivec2(face->glyph->bitmap.width, face->glyph->bitmap.rows),
            glm::ivec2(face->glyph->bitmap_left, face->glyph->bitmap_top),
            (GLuint)face->glyph->advance.x
        };
        Characters.insert(std::pair<GLchar, Character>(c, character));
    }
    glBindTexture(GL_TEXTURE_2D, 0);
    // Destroy FreeType once we're finished
    FT_Done_Face(face);
    FT_Done_FreeType(ft);
}

The shader (text.glsl) that is created when the Textrenderer object is created is as follows:

#include "common.glh"

varying vec2 texCoord0;
varying vec3 worldPos0;

#if defined(VS_BUILD)
attribute vec3 position;
attribute vec2 texCoord;

uniform mat4 T_model;
uniform mat4 T_MVP;

void main()
{
    gl_Position = T_MVP * vec4(position.xy, 0.0, 1.0);
    texCoord0 = texCoord;
    worldPos0 = (T_model * vec4(position.xy, 0.0, 1.0)).xyz;
}

#elif defined(FS_BUILD)

uniform sampler2D H_text;
uniform vec3 H_textColor;

DeclareFragOutput(0, vec4);
void main()
{
    vec4 sampled = vec4(1.0, 1.0, 1.0, texture2D(H_text, texCoord0).r);
    vec4 color = vec4(H_textColor, 1.0) * sampled;
    SetFragOutput(0, sampled * color);
}

#endif

Following this set-up of the textRendererObject game object, its Text game component, and the text.glsl shader, the following render function is called every frame:

void TextRenderer::RenderTextRenderer(...)
{
    this->TextShader.Bind(); // "text.glsl" created earlier
    this->TextShader.UpdateUniformsTextRenderer(...);
    RenderText("TEST test", 100, 100, 1); // responsible for drawing
}

UpdateUniformsTextRenderer(...) is responsible for setting the values of the uniforms in text.glsl and is as follows:

void Shader::UpdateUniformsTextRenderer(Transform* transform, const RenderingEngine& renderingEngine, const Camera& camera)
{
    Matrix4f worldMatrix = transform->GetTransformation();
    Matrix4f projectedMatrix = camera.GetViewProjection() * worldMatrix;
    for (unsigned int i = 0; i < m_shaderData->GetUniformNames().size(); i++)
    {
        std::string uniformName = m_shaderData->GetUniformNames()[i];
        std::string uniformType = m_shaderData->GetUniformTypes()[i];

        if (uniformName.substr(0, 2) == "T_")
        {
            if (uniformName == "T_MVP")
                SetUniformMatrix4f(uniformName, projectedMatrix);
            else if (uniformName == "T_model")
                SetUniformMatrix4f(uniformName, worldMatrix);
            else
                throw "Invalid Transform Uniform: " + uniformName;
        }
        else if (uniformName.substr(0, 2) == "H_")
        {
            if (uniformName == "H_text") // Texture used to draw text
            {
                int samplerSlot = renderingEngine.GetSamplerSlot(uniformName);
                SetUniformi(uniformName, samplerSlot);
            }
            else if (uniformName == "H_textColor")
                SetUniformVector3f(uniformName, Vector3f(1, 0, 0)); // red
        }
    }
}

And finally the function RenderText that actually draws the text is as follows:

void TextRenderer::RenderText(std::string text, GLfloat x, GLfloat y, GLfloat scale)
{
    glActiveTexture(GL_TEXTURE0);
    glBindVertexArray(this->VAO);

    // Iterate through all characters
    std::string::const_iterator c;
    for (c = text.begin(); c != text.end(); c++)
    {
        Character ch = Characters[*c];

        GLfloat xpos = x + ch.Bearing.x * scale;
        GLfloat ypos = y + (this->Characters['H'].Bearing.y - ch.Bearing.y) * scale;

        GLfloat w = ch.Size.x * scale;
        GLfloat h = ch.Size.y * scale;
        // Update VBO for each character
        GLfloat vertices[6][4] = {
            { xpos,     ypos + h,   0.0, 1.0 },
            { xpos + w, ypos,       1.0, 0.0 },
            { xpos,     ypos,       0.0, 0.0 },

            { xpos,     ypos + h,   0.0, 1.0 },
            { xpos + w, ypos + h,   1.0, 1.0 },
            { xpos + w, ypos,       1.0, 0.0 }
        };
        // Render glyph texture over quad
        glBindTexture(GL_TEXTURE_2D, ch.TextureID);
        // Update content of VBO memory
        glBindBuffer(GL_ARRAY_BUFFER, this->VBO);
        glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(vertices), vertices); // Be sure to use glBufferSubData and not glBufferData
        glBindBuffer(GL_ARRAY_BUFFER, 0);
        // Render quad
        glDrawArrays(GL_TRIANGLES, 0, 6);
        // Now advance the cursor for the next glyph
        x += (ch.Advance >> 6) * scale; // Advance is in 1/64th pixels, so shift right by 6 (divide by 64)
    }
    glBindVertexArray(0);
    glBindTexture(GL_TEXTURE_2D, 0);
}
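For clarity on the `ch.Advance >> 6` step: FreeType stores glyph advances in 26.6 fixed-point format, where the low 6 bits are the fractional part, so shifting right by 6 divides by 64 and yields whole pixels. A quick standalone check of that conversion (the sample value is illustrative):

```python
def advance_to_pixels(advance_26_6):
    # FreeType's advance.x is in 26.6 fixed point: units of 1/64 pixel.
    # Shifting right by 6 discards the fractional bits, leaving whole pixels.
    return advance_26_6 >> 6

# e.g. an advance of 1664 units is 1664 / 64 = 26 pixels
print(advance_to_pixels(1664))  # 26
```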

And that’s my implementation. Can anyone see where I have gone wrong? Any feedback is much appreciated.

HLSL Deferred rendering PointLight issue

Hi everybody. I’m currently following this tutorial on point lights in deferred rendering. I’m trying to implement a point light, but it is only correct when the camera is at (0, 0, 0). What should be changed?

Here is the shader

float4x4 World;
float4x4 View;
float4x4 Projection;

//color of the light
float3 Color;

//position of the camera, for specular light
float3 cameraPosition;

//this is used to compute the world-position
float4x4 InvertViewProjection;

//this is the position of the light
float3 lightPosition;

//how far does this light reach
float lightRadius;

//control the brightness of the light
float lightIntensity = 1.0f;

// diffuse color, and specularIntensity in the alpha channel
texture colorMap;
// normals, and specularPower in the alpha channel
texture normalMap;
//depth
texture depthMap;

sampler colorSampler = sampler_state
{
    Texture = (colorMap);
    AddressU = CLAMP;
    AddressV = CLAMP;
    MagFilter = LINEAR;
    MinFilter = LINEAR;
    Mipfilter = LINEAR;
};
sampler depthSampler = sampler_state
{
    Texture = (depthMap);
    AddressU = CLAMP;
    AddressV = CLAMP;
    MagFilter = POINT;
    MinFilter = POINT;
    Mipfilter = POINT;
};
sampler normalSampler = sampler_state
{
    Texture = (normalMap);
    AddressU = CLAMP;
    AddressV = CLAMP;
    MagFilter = POINT;
    MinFilter = POINT;
    Mipfilter = POINT;
};

struct VertexShaderInput
{
    float3 Position : POSITION0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float4 ScreenPosition : TEXCOORD0;
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    //processing geometry coordinates
    float4 worldPosition = mul(float4(input.Position, 1), World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);
    output.ScreenPosition = output.Position;
    return output;
}

float2 halfPixel;
float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
{
    //obtain screen position
    input.ScreenPosition.xy /= input.ScreenPosition.w;
    //obtain textureCoordinates corresponding to the current pixel
    //the screen coordinates are in [-1,1]*[1,-1]
    //the texture coordinates need to be in [0,1]*[0,1]
    float2 texCoord = 0.5f * (float2(input.ScreenPosition.x, -input.ScreenPosition.y) + 1);
    //align texels to pixels
    texCoord -= halfPixel;

    //get normal data from the normalMap
    float4 normalData = tex2D(normalSampler, texCoord);
    //transform normal back into [-1,1] range
    float3 normal = 2.0f * normalData.xyz - 1.0f;
    //get specular power
    float specularPower = normalData.a * 255;
    //get specular intensity from the colorMap
    float specularIntensity = tex2D(colorSampler, texCoord).a;

    //read depth
    float depthVal = tex2D(depthSampler, texCoord).r;

    //compute screen-space position
    float4 position;
    position.xy = input.ScreenPosition.xy;
    position.z = depthVal;
    position.w = 1.0f;
    //transform to world space
    position = mul(position, InvertViewProjection);
    position /= position.w;

    //surface-to-light vector
    float3 lightVector = lightPosition - position;

    //compute attenuation based on distance - linear attenuation
    float attenuation = saturate(1.0f - length(lightVector) / lightRadius);

    //normalize light vector
    lightVector = normalize(lightVector);

    float NdL = max(0, dot(normal, lightVector));
    float3 diffuseLight = NdL * Color.rgb;

    //reflection vector
    float3 reflectionVector = normalize(reflect(-lightVector, normal));
    //camera-to-surface vector
    float3 directionToCamera = normalize(cameraPosition - position);
    //compute specular light
    float specularLight = specularIntensity * pow(saturate(dot(reflectionVector, directionToCamera)), specularPower);

    //take into account attenuation and lightIntensity.
    return attenuation * lightIntensity * float4(diffuseLight.rgb, specularLight);
}

technique Technique1
{
    pass Pass1
    {
        VertexShader = compile vs_2_0 VertexShaderFunction();
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}
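For reference while debugging, the linear attenuation term in the shader, `saturate(1.0f - length(lightVector)/lightRadius)`, is easy to sanity-check on the CPU. A Python transcription (illustrative only, with hypothetical sample positions):

```python
import math

def attenuation(light_pos, surface_pos, radius):
    # Linear falloff: 1 at the light, 0 at the radius, clamped like saturate().
    d = math.dist(light_pos, surface_pos)
    return max(0.0, min(1.0, 1.0 - d / radius))

# A surface halfway to the radius receives half the light.
print(attenuation((0, 0, 0), (5, 0, 0), 10))  # 0.5
```

Comparing values like this against what the shader produces for known world positions can help isolate whether the bug is in the attenuation or in the world-position reconstruction.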

Can an end-user modify conditional rendering in React?

On my single-page app I am using MSAL.js to authenticate users and also to extract the groups they belong to using Microsoft Graph endpoints. I save the specific groups the user belongs to in a variable, and according to the content of that variable a different home page is rendered. The code looks like this:

if ( == 'AppAdmin') {
    return (
        <div className='h1'>Admin Dashboard</div>
    );
} else if ( == 'AppManager') {
    return (
        <div className='h1'>App Manager Dashboard</div>
    );
} else {
    return (
        <div className='h1'>User Dashboard</div>
    );
}

The variable being compared contains the group the user belongs to in Active Directory.

Will an end user who does not belong to the AppAdmin or AppManager groups be able to modify the variable's value in their web browser, fooling the browser into rendering the admin or manager content?

Rendering multiple Instagram feeds as one within SharePoint Online

Hoping some of my fellow SharePoint gurus can point me in the right direction.

We have multiple brands; each brand has its own Instagram account and posts its own content. We would like to render all of the brands into one feed on our company intranet, keeping our internal users up to date without their having to navigate to each account.

We currently have a solution, but it requires you to pick a brand from a drop-down and only renders one feed at a time.

Need to display:

  • Brand
  • Text of the post
  • Picture/image

Ubuntu Mate (rendering) terribly slow on Ubuntu 18.04

I’m running Ubuntu MATE 18.04.3 on a MacBook Pro (Retina) from mid-2014, but the speed at which some applications run makes it impossible to work with. Chrome, Chromium, Firefox and Slack (which uses Chromium) in particular show a lot of lag when typing or scrolling, and it takes a very long time to load a website that uses a lot of JavaScript for visualizations, like a dashboard.

Booting up and running the shell and other programs is as quick as it should be, and the other OS, macOS, runs just fine.

I’ve tried changing the hardware acceleration setting in Chrome and Firefox and installed mbpfan, but nothing changed.

Any help would be greatly appreciated; I’d have to move back to macOS as my default OS if there’s no way to fix this, and that’s something I’d like to avoid.

Below is more info on my system and some relevant output from several commands. I ran htop as well, and CPU usage did increase when running one of the applications, but it did not use an abnormal amount of CPU/RAM.

Ubuntu is installed on the internal SSD.

Specifications of machine

inxi -SMIG -! 31

System:    Kernel: 5.0.0-31-generic x86_64 bits: 64 Desktop: MATE 1.20.1
           Distro: Ubuntu 18.04.3 LTS
Machine:   Device: laptop System: Apple product: MacBookPro11 2 v: 1.0 serial: N/A
           Mobo: Apple model: Mac-3CBD00234E554E41 v: MacBookPro11 2 serial: N/A
           UEFI: Apple v: MBP112.88Z.0146.B00.1804111138 date: 04/11/2018
Graphics:  Card: Intel Crystal Well Integrated Graphics Controller
           Display Server: x11 (X.Org 1.20.4) drivers: modesetting (unloaded: fbdev,vesa)
           Resolution: 2880x1800@59.99hz, 2560x1440@59.95hz
           OpenGL: renderer: Mesa DRI Intel Haswell Mobile version: 4.5 Mesa 19.0.8
Info:      Processes: 261 Uptime: 37 min Memory: 1660.9/15946.0MB Client: Shell (zsh) inxi: 2.3.56

cat /etc/os-release

NAME="Ubuntu"
VERSION="18.04.3 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.3 LTS"
VERSION_ID="18.04"

cat /proc/version

 Linux version 5.0.0-31-generic (buildd@lgw01-amd64-046) (gcc version 7.4.0 (Ubuntu 7.4.0-1ubuntu1~18.04.1)) #33~18.04.1-Ubuntu SMP Tue Oct 1 10:20:39 UTC 2019 

glxinfo | grep render

direct rendering: Yes
    GLX_MESA_multithread_makecurrent, GLX_MESA_query_renderer,
    GLX_INTEL_swap_event, GLX_MESA_copy_sub_buffer, GLX_MESA_query_renderer,
Extended renderer info (GLX_MESA_query_renderer):
OpenGL renderer string: Mesa DRI Intel(R) Haswell Mobile
    GL_ARB_compute_shader, GL_ARB_conditional_render_inverted,
    GL_MESA_texture_signed_rgba, GL_NV_conditional_render, GL_NV_depth_clamp,
    GL_ARB_compute_shader, GL_ARB_conditional_render_inverted,
    GL_NV_conditional_render, GL_NV_depth_clamp, GL_NV_fog_distance,
    GL_EXT_render_snorm, GL_EXT_robustness, GL_EXT_sRGB_write_control,
    GL_OES_fbo_render_mipmap, GL_OES_geometry_point_size,

cpufreq-info copied for 1 core

cpufrequtils 008: cpufreq-info (C) Dominik Brodowski 2004-2009
Report errors and bugs to, please.
analyzing CPU 0:
  driver: intel_pstate
  CPUs which run at the same hardware frequency: 0
  CPUs which need to have their frequency coordinated by software: 0
  maximum transition latency: 4294.55 ms.
  hardware limits: 800 MHz - 3.40 GHz
  available cpufreq governors: performance, powersave
  current policy: frequency should be within 800 MHz and 3.40 GHz.
                  The governor "powersave" may decide which speed to use
                  within this range.
  current CPU frequency is 3.27 GHz.

cat /sys/class/thermal/thermal_zone*/temp

30900
76000

cat /proc/cpuinfo copied for 1 core

processor   : 0
vendor_id   : GenuineIntel
cpu family  : 6
model       : 70
model name  : Intel(R) Core(TM) i7-4770HQ CPU @ 2.20GHz
stepping    : 1
microcode   : 0x1b
cpu MHz     : 3236.768
cache size  : 6144 KB
physical id : 0
siblings    : 8
core id     : 0
cpu cores   : 4
apicid      : 0
initial apicid  : 0
fpu     : yes
fpu_exception   : yes
cpuid level : 13
wp      : yes
flags       : {REMOVED}
bugs        : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs
bogomips    : 4389.96
clflush size    : 64
cache_alignment : 64
address sizes   : 39 bits physical, 48 bits virtual
power management: