Is it safe to expose port 22 on a database VM?

I have seen many answers to this question in different scenarios but I am still unsure of the actual answer.

I have a VM in the cloud (Azure) which will be hosting my production database. Is it safe to leave port 22 open for my SSH connection? It also has a public IP address; is that safe too?
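For context on one common mitigation: many setups keep port 22 open but restrict the NSG rule to a known admin IP. A hypothetical Azure CLI sketch (the resource group, NSG name, and source IP below are placeholders for your own values):

```shell
# Assumes the VM's NIC is attached to the NSG "db-vm-nsg" in "my-rg".
# Replace 203.0.113.4/32 with your own admin IP or VPN range.
az network nsg rule create \
  --resource-group my-rg \
  --nsg-name db-vm-nsg \
  --name allow-ssh-from-admin \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes 203.0.113.4/32 \
  --destination-port-ranges 22
```

With a rule like this in place, SSH is only reachable from the listed prefix, even though the port itself stays open.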

This is my first time having to concern myself with these types of questions so apologies for the lack of understanding.

GraphQL – Should I expose link tables?

I am experimenting with converting an API to GraphQL, and I have a database that has many-to-many relationships that are stored via link tables; something like this:

CREATE TABLE accounts (
  id int,
  last_name varchar,
  first_name varchar
);

CREATE TABLE files (
  id int,
  content varchar,
  name varchar
);

CREATE TABLE account_file_links (
  id int,
  account_id int,
  file_id int,
  can_edit tinyint,
  FOREIGN KEY (account_id) REFERENCES accounts(id),
  FOREIGN KEY (file_id) REFERENCES files(id)
);

I am wondering whether I should expose these links as their own types in the GraphQL schema or not. When I think of my database as a graph, the nodes are the accounts and the files, while the edges would be the account_file_links. There are attributes on the link (in this example, the can_edit property) that need to be presented to API consumers.
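One common option (a sketch, not from the question) is to model the link itself as an explicit object type so the edge attributes have a home; the names below mirror the tables above:

```graphql
type Account {
  id: ID!
  firstName: String
  lastName: String
  files: [AccountFileLink!]!
}

type File {
  id: ID!
  name: String
  content: String
}

# The link table becomes an "edge" type carrying can_edit.
type AccountFileLink {
  account: Account!
  file: File!
  canEdit: Boolean!
}
```

If the link had no attributes, `Account.files: [File!]!` would usually suffice; it is the presence of `can_edit` that argues for a dedicated type.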

Expose encrypted serial ID in Elixir

I’m working on a Phoenix/Absinthe application, and I thought I’d expose encrypted sequential IDs instead of UUIDs, since they are a bit shorter. Encryption in Elixir/Erlang seems very hard, though, so I think I’ll end up using UUIDs eventually.

Anyway, I’d like to know how bad, from a security perspective, the solution I came up with is:

defmodule MyAppWeb.GraphQL.Types.EncId do
  use Absinthe.Schema.Notation

  defp secret_key(len \\ 32) do
    Application.get_env(:my_app, MyAppWeb.Endpoint)
    |> Keyword.get(:secret_key_base, "")
    |> String.slice(0, len)
  end

  defp pad_bytes(binary, block \\ 16) do
    padding_bits =
      case rem(byte_size(binary), block) do
        0 -> 0
        r -> (block - r) * 8
      end

    <<0::size(padding_bits)>> <> binary
  end

  defp unpad_bytes(<<0, tail::bitstring>>), do: unpad_bytes(tail)
  defp unpad_bytes(binary), do: binary

  defp encrypt(raw_binary) do
    padded_binary = pad_bytes(raw_binary)

    :crypto.crypto_one_time(:aes_256_ecb, secret_key(), padded_binary, true)
  end

  defp decrypt(raw_enc) do
    :crypto.crypto_one_time(:aes_256_ecb, secret_key(), raw_enc, false)
    |> unpad_bytes()
    |> :erlang.binary_to_term()
  end

  def serialize(id) do
    id
    |> :erlang.term_to_binary()
    |> encrypt()
    |> Base.url_encode64(padding: false)
  end

  def parse(%{value: enc_id}) do
    try do
      {:ok, raw_enc} = Base.url_decode64(enc_id, padding: false)
      {:ok, decrypt(raw_enc)}
    rescue
      _ -> :error
    end
  end

  scalar :enc_id, name: "EncId" do
    serialize(&__MODULE__.serialize/1)

    parse(&__MODULE__.parse/1)
  end
end

How do you expose the field that caused the error?


Context: field errors

Given an input in the form:

{
    "list": [
        { "username": null },
        { "username": "test" },
        { "username": "" }
    ]
}

Some APIs expose errors in the form of:

"errors": [
    { "path": ["input", "list", 0, "username"], "message": "Username cannot be empty" },
    { "path": ["input", "list", 2, "username"], "message": "Username cannot be empty" }
]

which uniquely identifies the field each error corresponds to. There are also other errors that don’t correspond to specific input fields, and those have no path.

This is done, from what I understand, to be able to easily map an error to the corresponding form field in the UI.

Problem

Unless all of the validation logic lives in the same place where you receive the input, it’s not clear to me how you would keep track of what the original field path is.

For example, given something like:

public async Task<T> Execute(SomeInput input)
{
    var validationError = await Validate(input);
    // ...
}

inside Validate you have already lost the name of the original input path segment.

Moreover, inside Validate you often need “sub-validators” that also lose the context of the SomeInput parameter.

For example in the case of a “unique username” validator you would probably only pass the username string. But the exception/error is returned inside that validator, which knows nothing about the original context.
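One pattern (a sketch in TypeScript, not from the question) is to thread a path prefix down the call stack, so each sub-validator only appends its own segment; all names here are illustrative:

```typescript
// Each validator receives the path prefix that locates its input,
// and appends only its own segment when reporting an error.
type FieldError = { path: (string | number)[]; message: string };

function validateUsername(
  username: string | null,
  path: (string | number)[]
): FieldError[] {
  if (!username) {
    return [{ path: [...path, "username"], message: "Username cannot be empty" }];
  }
  return [];
}

function validateList(
  list: { username: string | null }[],
  path: (string | number)[]
): FieldError[] {
  // The list validator adds the index; it knows nothing about "input".
  return list.flatMap((item, i) => validateUsername(item.username, [...path, i]));
}

const input = { list: [{ username: null }, { username: "test" }, { username: "" }] };
const errors = validateList(input.list, ["input", "list"]);
// errors[0].path is ["input", "list", 0, "username"]
```

The sub-validator never needs the whole SomeInput; it only receives the prefix that locates it, which composes naturally however deep the nesting goes.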

Question

What are the industry patterns to keep track of the original request context down the call stack? Are there any other ways, except with path-like fields, to map errors to specific input?

Why use ingress to expose a service when you have a LoadBalancer?

I have been researching how Kubernetes exposes services to the outside world and have found lots of articles explaining the differences between using NodePort, LoadBalancer, and Ingress.

However, none of them seems to answer a very fundamental question: what are the use cases where you would actually want to use an ingress controller? The articles describe the three as equivalent methods of exposing services, but they’re not. NodePort and LoadBalancer are service types. An ingress controller is a service that you can use to load-balance traffic to other services, but it’s still just a service, which means you need to configure a load balancer (NodePort is discouraged, so I’m ignoring it for this discussion) to expose it.

So we now have an external load balancer, pointing to an internal load balancer service, which finally points to the actual services we want to expose.

This duplicates functionality for worse than no benefit: you’re now paying for an external load balancer AND your cluster is wasting cycles running redundant load-balancing services.

The external load balancers provided by the various cloud providers already include functionality like path routing, so why even bother with an ingress at all?

The only situations I can think of where it makes sense to use one are:

  • trying to load balance services against intra-cluster traffic
  • you want to route external traffic being directed to multiple hostnames (as opposed to paths) but don’t want to have a separate external LB for every hostname. (eg: Using an AWS NLB to accept all traffic, which forwards that to an ingress controller to sort out)
  • you’re trying to simulate a complex multi-LB environment and don’t have the means to run multiple actual load balancers (eg: using minikube on your laptop)
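The hostname fan-out in the second bullet might look like this single Ingress (a sketch; the hostnames and service names are made up):

```yaml
# One cloud LB (e.g. an NLB) fronts the ingress controller, which
# then routes by Host header instead of needing one LB per hostname.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hostname-fanout
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-svc
            port:
              number: 80
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-svc
            port:
              number: 80
```

Adding a third hostname is one more rule in this manifest rather than one more billed load balancer.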

Do I have the right of it, or am I missing something?

How to show application windows only for current Space in Exposé, and not aggregated from all Spaces

I just started using Spaces in 10.14 (I know I’m a little late to it) and was excited at first, until I encountered this scenario:

When I am in a Space and do Exposé to “Show me ALL Application Windows”, Exposé shows me ALL the windows opened for that App across ALL Spaces (and not just from the current Space)

However, the behavior in the next scenario is perfect: When I am in a Space and do Exposé to “Show me ALL Windows for ALL running applications”, Exposé dutifully shows me only the relevant windows for “ALL” running applications in the “Current” space only. – GREAT!

Being shown “All Windows for the Current Application Across ALL Spaces” easily overwhelms me, and to me it cancels out the usefulness of Spaces for organizing productivity and workflow.

I created my Desktops to keep things separate.

If anyone has any insight, please share – I would love to use Spaces!


Info on my research

I found this article and these 2 Stack Articles:

Customize Mission Control To Show Only Windows From Current Desktop Space

Exposé in 10.7: When exposing “Application Windows”, how do you show only the windows in the current space?

How to modify app expose to show only open windows of current application on current desktop?

I thought I’d try adding the wvous-show-windows-in-other-spaces key back into my plist:

I edited ~/Library/Preferences/com.apple.dock.plist and added the XML value

    <key>wvous-show-windows-in-other-spaces</key>
    <false/>

killall Dock

But it did not work.
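For what it’s worth, the same change can be attempted via defaults(1) instead of hand-editing the plist (same key, so on 10.14 it may equally be ignored):

```shell
# Write the key the plist edit above was trying to set, then restart the Dock.
defaults write com.apple.dock wvous-show-windows-in-other-spaces -bool false
killall Dock
```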

What are the required modifications in web.config in sharepoint site to expose URL to ajax call

I’m new to working with SharePoint. I tried to call a SharePoint API URL from an AJAX call in JavaScript, but it continuously fails and throws an error, which I can tell is a CORS error. So I’ve made changes in web.config like below.

<add name="Access-Control-Allow-Origin" value="*" />
<add name="Access-Control-Allow-Headers" value="Content-Type,Accept,X-FORMS_BASED_AUTH_ACCEPTED,crossDomain,credentials" />
<add name="Access-Control-Allow-Methods" value="GET, POST, PUT, DELETE, OPTIONS" />
<add name="Access-Control-Allow-Credentials" value="true" />

and the AJAX call is as below

$.ajax({
    url: url,
    headers: {
        Accept: "application/json;odata=verbose"
    },
    xhrFields: { withCredentials: true },
    async: false,
    success: function (data) {
        var items = data.d;
        console.log("Login Name: " + items.LoginName);
        console.log("Email: " + items.Email);
        console.log("ID: " + items.Id);
        console.log("Title: " + items.Title);
    },
    error: function (jqxr, errorCode, errorThrown) {
        console.log(jqxr.responseText);
    }
});

Please help me with this. I’ve been trying for two days.

Print-Button on expose

I have been thinking and thinking about this topic. I’ve researched multiple sources and theories, but none of them came close to what we do, probably because of the fairly unique German regulations for the real-estate industry.

Our target group is pretty versatile, but it also includes a good number of people aged 60+. We recently switched from classical PDF exposés to a digital, automatically generated version. We designed the print media query, since granting access to a static version is mandatory for legal reasons. That didn’t help much, because the older target groups still ask for a printed version, so we are planning to add a print button for them.

The question I am asking now is: where should we put it? Maybe some of you have ideas or tips, preferably with an explanation of why.

[screenshot of the landing view]

This is how the landing view looks. My idea would be to put the print button below the object ID or below the price. What do you think, and what position would you recommend?

Hint: the top right is reserved for a new “status” function.

How to expose filter for entity reference as dropdown in the Search API view?

In my Search API view, I’d like an Exposed filter > Entity reference to be a select box, not a text field.

Details

I have a view which I created like:

  • Views > Add new view > Show: default node index

So it uses the Search API index as a basis.

In the Search API, I have ticked field_entity for indexing; it is an Entity reference > Content_type: entity (confusing naming, I know).

I want the user to be able to go to the views page and filter the view results by field_entity (which is an exposed entity-reference filter).

The issue is that the view shows the exposed filter as a text field. I have edited field_entity and checked the box Render Views filters as select list – this works fine for a normal view, but not for a Search API view.

Question

Can I somehow convert this text field to a select box?


To clarify, I cannot use custom or contributed modules on this project, so it needs to be done without using hooks, or other modules, etc.

Drupal 7, Search API Views