How can I do a grouping in SQL Server?

I have the following query (SQL Server):

SELECT, b.Respuesta FROM users a INNER JOIN respuestas b ON = 

That gives me this result:


I want to group by the user id, but when I do I lose the answers:


That is, for my user 71 the row should contain all of their answers.

I know I could do a join per answer and treat each one as an independent column, but I don't like that practice because the queries lose efficiency.

If you have any idea, or know how to do this kind of grouping without losing data, I would appreciate it.
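A sketch of the kind of grouping being asked for, using SQL Server's aggregate string concatenation. The join columns a.Id and b.UserId are assumptions, since the real column names are not shown in the question; STRING_AGG requires SQL Server 2017+, and the STUFF/FOR XML PATH variant covers older versions:

```sql
-- Sketch, assuming hypothetical columns users.Id and respuestas.UserId:
-- collapse each user's answers into a single comma-separated row.

-- SQL Server 2017 and later:
SELECT a.Id,
       STRING_AGG(b.Respuesta, ', ') AS Respuestas
FROM users a
INNER JOIN respuestas b ON a.Id = b.UserId
GROUP BY a.Id;

-- Older versions: same idea with STUFF + FOR XML PATH.
SELECT a.Id,
       STUFF((SELECT ', ' + b.Respuesta
              FROM respuestas b
              WHERE b.UserId = a.Id
              FOR XML PATH('')), 1, 2, '') AS Respuestas
FROM users a;
```

Either way, user 71 comes back as one row with all of their answers concatenated, instead of one row per answer.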

Serve two different websites, one under root and another under /news for nginx

I have this set up under Apache but can’t get it working under nginx. I have two websites: one that covers everything, and another under /news/. They run the same framework, SilverStripe.

Here is my nginx conf:

server {
    include mime.types;
    default_type application/octet-stream;
    client_max_body_size 0; # Manage this in php.ini
    listen 80;
    listen 443 ssl;
    root /var/www/html/example/webroot;
    server_name;

    ssl on;
    ssl_certificate /etc/letsencrypt/live/example/cert.pem;
    ssl_certificate_key /etc/letsencrypt/live/example/privkey.pem;

    access_log /var/log/nginx/example/access.log main;
    error_log /var/log/nginx/example/error.log;

    # Defend against SS-2015-013 --
    if ($http_x_forwarded_host) {
        return 400;
    }

    location ^~ /news/ {
        root /var/www/html/example2/webroot;
        try_files $uri /framework/main.php?url=$uri&$query_string;

        location ~ /framework/.*(main|rpc|tiny_mce_gzip)\.php$ {
            fastcgi_buffer_size 32k;
            fastcgi_busy_buffers_size 64k;
            fastcgi_buffers 4 32k;
            fastcgi_keep_conn on;
            fastcgi_pass unix:/run/php-fpm/php-fpm.sock;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }
    }

    location / {
        try_files $uri /framework/main.php?url=$uri&$query_string;
    }

    error_page 404 /assets/error-404.html;
    error_page 500 /assets/error-500.html;

    location ^~ /assets/ {
        sendfile on;
        try_files $uri =404;
    }

    location ~ /framework/.*(main|rpc|tiny_mce_gzip)\.php$ {
        fastcgi_buffer_size 32k;
        fastcgi_busy_buffers_size 64k;
        fastcgi_buffers 4 32k;
        fastcgi_keep_conn on;
        fastcgi_pass unix:/run/php-fpm/php-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }

    # Denials
    location ~ /\.. {
        deny all;
    }
    location ~ \.ss$ {
        satisfy any;
        allow;
        deny all;
    }
    location ~ \.ya?ml$ {
        deny all;
    }
    location ~* README.*$ {
        deny all;
    }
    location ^~ /vendor/ {
        deny all;
    }
    location ~* /silverstripe-cache/ {
        deny all;
    }
    location ~* composer\.(json|lock)$ {
        deny all;
    }
    location ~* /(cms|framework)/silverstripe_version$ {
        deny all;
    }
}

I’ve tried a few other variations on this, but it always ends with the same result: the server returning a 301 Moved Permanently to the same URL.
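One detail worth checking here (a sketch, not a tested fix): with root inside location ^~ /news/, nginx appends the full request URI, so /news/foo is looked up at /var/www/html/example2/webroot/news/foo, which probably does not exist. alias maps the matched prefix away instead:

```nginx
# Sketch: 'alias' replaces the matched /news/ prefix, whereas 'root'
# appends the whole URI (including /news/) to the new document root.
location ^~ /news/ {
    alias /var/www/html/example2/webroot/;
    try_files $uri /news/framework/main.php?url=$uri&$query_string;
}
```

The nested PHP location would need a matching adjustment so SCRIPT_FILENAME resolves inside example2's webroot; this only illustrates the root-vs-alias distinction.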

What are the best practices of designing database to serve multiple apps within one ecosystem?

To clarify the question, here is the scenario. There is a web app which implements a RESTful API with JWT authorization. It runs on PHP/MySQL at the back end and Angular 2 at the front.

Then comes another app which needs to use the same user credentials as the first one, but has its own context. The two apps will integrate with each other more and more over time; however, the extent of that integration is still unclear, beyond authorization and subscription payments. Think of the Atlassian ecosystem as a broad example of integration and account administration.

How would you design the database and APIs around it in such a case?
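To make the question concrete, the shape under discussion could look like the sketch below (all schema, table, and column names are hypothetical): one shared schema owns identity and subscriptions, and each app keeps its own context in its own schema, referencing only the shared user id.

```sql
-- Hypothetical sketch (MySQL): shared identity/billing in one schema,
-- per-app context in separate schemas.
CREATE TABLE shared_users (
    id        INT PRIMARY KEY AUTO_INCREMENT,
    email     VARCHAR(255) NOT NULL UNIQUE,
    pass_hash VARCHAR(255) NOT NULL
);

CREATE TABLE shared_subscriptions (
    id      INT PRIMARY KEY AUTO_INCREMENT,
    user_id INT NOT NULL,
    app     VARCHAR(64) NOT NULL,   -- which app the plan covers
    plan    VARCHAR(64) NOT NULL,
    FOREIGN KEY (user_id) REFERENCES shared_users(id)
);

-- App-specific data only ever references the shared user id.
CREATE TABLE app2_profiles (
    user_id  INT PRIMARY KEY,
    settings JSON,
    FOREIGN KEY (user_id) REFERENCES shared_users(id)
);
```

With this split, the auth API issues JWTs against shared_users, and each app's API validates the token and joins on user_id without duplicating credentials.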

If I run ionic serve my site/app works correctly as it should, but when I put the www output in public_html the app stops working

I am new to programming and I am learning to make sites/apps in Ionic 4. So far I have only made one, and I am stuck on the part about putting it online.

During development, whenever I run ionic serve in the terminal, the app runs normally, making requests to an API that works fine.

But when I run ionic build --prod --release and copy the contents of the www folder to the destination site's public_html,

it stops working, and this is what appears:


NOTE: when I open it, it goes to the tasks page, since that is set as the default homepage; but any button I press shows the error in the image.

Am I missing something? When I test it in VS Code it works.
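One assumption worth ruling out: if the build ends up in a sub-folder of public_html rather than the web root, the base href baked into www/index.html no longer matches, and routes and assets start failing only in the deployed copy. A sketch (the /myapp/ path is hypothetical):

```html
<!-- www/index.html: the base href must match the folder
     the app is actually served from (hypothetical path) -->
<base href="/myapp/">
```

If that is the cause, it can also be set at build time by forwarding the flag to Angular, e.g. ionic build --prod -- --base-href=/myapp/ (assuming the CLI forwards arguments after -- to ng build).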

Make `lighttpd` 1.4 reverse proxy serve an application from different path

I am trying to configure an application (Python Flask) to run behind a lighttpd reverse proxy. I am using lighttpd v1.4.53.

The part that fails has its root cause in the fact that the application is served from a subpath it is not aware of. So if a user accesses 

the request should be proxied to the application without the myapp path, without the app knowing or handling that.

The following configuration makes the basics work (i.e. some HTML is returned, but for example without the CSS styling):

$HTTP["url"] =~ "^/myapp/" {
    proxy.server = (
        "" => ( (
            "host" => "backend",
            "port" => 5000
        ) )
    )
    proxy.header = (
        "map-urlpath" => (
            "/myapp/" => "/",
            "/myapp"  => "",    # required? correct?
        )
    )
}

The problem is that the app generates (relative) links without the myapp part (of course). So, for example, in the HTML the link to the stylesheet is

<link rel=stylesheet type=text/css href="/static/styles.css"> 

which does not work. It should be (with the myapp part)

<link rel=stylesheet type=text/css href="/myapp/static/styles.css"> 

From the documentation of mod_proxy I was hoping that proxy.header/map-urlpath would do the trick, but apparently it does not.

What would a correct lighttpd config look like? Or is this something that can (must?) be fixed in the Flask configuration? Note that I only have very limited (not to say “no”) influence on the Flask app.
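If it has to be fixed on the Flask side, one common pattern is a small WSGI wrapper that sets SCRIPT_NAME to the proxy prefix, so url_for() generates links that include /myapp. A sketch under the stated assumptions (the /myapp prefix, and that Flask builds its links via SCRIPT_NAME):

```python
class PrefixMiddleware:
    """WSGI wrapper: tell the wrapped app it is mounted under `prefix`,
    so frameworks that honor SCRIPT_NAME (e.g. Flask's url_for)
    generate links that include the reverse-proxy prefix."""

    def __init__(self, app, prefix="/myapp"):
        self.app = app
        self.prefix = prefix

    def __call__(self, environ, start_response):
        environ["SCRIPT_NAME"] = self.prefix
        # If the proxy did NOT strip the prefix, remove it from
        # PATH_INFO too, so routing still matches.
        path = environ.get("PATH_INFO", "")
        if path.startswith(self.prefix):
            environ["PATH_INFO"] = path[len(self.prefix):] or "/"
        return self.app(environ, start_response)

# Usage (sketch): app.wsgi_app = PrefixMiddleware(app.wsgi_app, "/myapp")
```

With this in place, the generated stylesheet link becomes /myapp/static/styles.css without touching the templates; whether it is acceptable given the limited influence on the Flask app is another matter.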

I think “full blown” Web Application Firewalls (WAFs) are doing “rewrites” like this all the time, no?

But then I found the following comment, more than four years old, in another SF question (“lighttpd reverse proxy rewrite”), which makes me fear that what I want might (still?) not be possible.

php artisan serve does not stop

When I run ‘php artisan serve’ to start my project, everything goes fine.

The problem comes when I try to stop the service (Ctrl + C): in the console I see it stop, but if I keep browsing to the URL, the application is still alive.

The only way to stop the service is to find it with netstat and then kill it with taskkill.

Can anyone help me figure out how to make the application die when I end the process?
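For reference, the netstat/taskkill workaround sketched out on Windows (8000 is artisan's default port; the placeholder pid is whatever the first command shows in its last column):

```
netstat -ano | findstr :8000
taskkill /PID <pid> /F
```

This force-kills the leftover php.exe that kept listening after Ctrl + C.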


ionic serve

ng run app:serve --host= --port=8100
[ng] The run command requires to be run in an Angular project, but a project definition could not be found.

[ERROR] ng has unexpectedly closed (exit code 1).

The Ionic CLI will exit. Please check any output above for error details.

Can we serve static files from Tomcat FASTER than NginX/Apache?

The following data is from testing on my own Windows machine (SSD) on localhost. When I download static content from Nginx, I get up to 120-140 MBps (I am sure it can be optimized further; Nginx actually claims it can reach a throughput of up to 0.98 Gbps). And this was done with an almost negligible increase in CPU/memory consumption. I have tried to do the same in Tomcat, and the max speed when downloading files is up to 25 Mbps. Also, this consumes 15-25% of CPU usage according to Task Manager. My machine has an SSD, so no, the file reading isn’t taking time or increasing CPU usage.

Can the file-serving part be improved, i.e., can it serve faster download speeds with less CPU consumption?

<Connector port="29022" protocol="org.apache.coyote.http11.Http11Nio2Protocol" useSendfile="true" connectionTimeout="300000"/> 

I am using the NIO2 connector. Should I tweak any more of the connector settings, or use the NIO/APR connector?

All file download requests are delegated to a thread pool using an asynchronous servlet call: request.setAttribute("org.apache.catalina.ASYNC_SUPPORTED", true);

The threads from the thread pool (pool size is 2) serve each of the requests in round-robin fashion in a loop. In each loop iteration, the following code is executed to send about 8 KB (8192 bytes).

if (bytesSent < toBeServedFileSize) {
    if (!buffer.hasRemaining()) {
        buffer.clear();, bytesSent); // refill buffer from the file
        buffer.flip();
    }
    int bytesToWrite = Math.min(buffer.remaining(), streamPacketSize);
    byte[] byteAr = new byte[bytesToWrite];
    for (int i = 0; i < bytesToWrite; i++) { byteAr[i] = buffer.get(); }
    // I'm guessing below is the most cpu-intensive line
    os.write(byteAr);
    bytesSent += bytesToWrite;
}

The whole purpose of this exercise is to both authenticate and authorize the file download requests. I don't think serving static content directly from Apache/Nginx offers a good solution for that requirement. Suggestions and alternate solutions are welcome.
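One alternative worth sketching (not verified in this setup): since the connector already sets useSendfile="true", the servlet can hand the transfer to Tomcat via its sendfile request attributes instead of copying bytes in a loop, after authentication/authorization has passed. The helper below only builds the attribute map; in the servlet you would apply it with attrs.forEach(request::setAttribute) after checking the org.apache.tomcat.sendfile.support request attribute.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SendfileHints {
    // Builds the request attributes Tomcat inspects to take over the
    // transfer via sendfile (zero-copy in the connector, no servlet
    // byte-shuffling, so much lower CPU per download).
    static Map<String, Object> sendfileAttributes(String path, long length) {
        Map<String, Object> attrs = new LinkedHashMap<>();
        attrs.put("org.apache.tomcat.sendfile.filename", path);
        attrs.put("org.apache.tomcat.sendfile.start", Long.valueOf(0L));
        attrs.put("org.apache.tomcat.sendfile.end", Long.valueOf(length));
        return attrs;
    }

    public static void main(String[] args) {
        // The file path here is only an example.
        sendfileAttributes("/data/big.bin", 8192L)
                .forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```

The response is then completed by Tomcat once the servlet returns, so this composes with the async servlet setup already in place.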

Configure Nginx to serve WordPress or Angular app depending on route on the same EC2 under same domain name

My goal is to serve a WordPress site for my static pages on routes such as /about-us and /contact, and then serve my bundled Angular application for /login, /signup, and the user auth-guarded routes.

I’ve configured nginx to serve my WordPress site; however, when I try to access the /login page, where the user should be served the Angular app, I’m not able to correctly rewrite the web root folder, and the server response is always a default nginx 404.

How do I properly override the web root folder to point to the index.html of the Angular code base? I know I’ve misused the root directive in the last location block below.

  • WordPress index.php location = /var/www/wordpress
  • Angular index.html location = /var/www/dist/my-app

My nginx configuration:

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    root /var/www/wordpress;
    index index.html index.php;
    server_name;

    if ($http_x_forwarded_proto = 'http') {
        return 301 https://$host$request_uri;
    }

    ### STATIC PAGE ROUTES ###

    location = / {
        # WordPress site log files
        error_log /var/log/nginx/wordpress-error.log;
        access_log /var/log/nginx/wordpress-access.log;
        try_files $uri $uri/ /index.php;
    }

    # followed by a bunch of other WordPress routes ...

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        include fastcgi_params;
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
    }

    ### PORTAL ROUTES ###

    location = /login/ {
        # Portal error logs
        error_log /var/log/nginx/portal-error.log;
        access_log /var/log/nginx/portal-access.log;
        root /var/www/dist/my-app;
        try_files $uri $uri/ /index.html;
    }

    # would be other Angular routes ...
}

The associated error with this configuration is:

... 2019/05/07 20:42:14 [error] 3311#3311: *1280 open() "/var/www/wordpress/index.html" failed (2: No such file or directory), ...
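That error shows the try_files fallback /index.html being resolved against the server-level root (/var/www/wordpress) rather than the Angular folder, because the fallback is a URI that leaves the exact-match location. A sketch of one shape that avoids this (untested): a prefix location with alias, so both the assets and the SPA fallback resolve under the Angular dist folder:

```nginx
# Sketch: prefix match + alias. 'alias' replaces the /login prefix,
# and the fallback URI re-enters this location, so it maps to
# /var/www/dist/my-app/index.html instead of the WordPress root.
location ^~ /login {
    alias /var/www/dist/my-app;
    try_files $uri $uri/ /login/index.html;
}
```

Other Angular entry routes (/signup, the guarded routes) would get the same treatment or share one prefix.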