Is there any research on Goal-Based Programming (GBP)?

The more I think about programming and optimization, the more I think “why not just specify a goal and have the program figure out the optimal solution to it”.

I am familiar with basic “optimization problems” such as fitting a best-fit line to data, or gradient-descent sorts of things. What I’m talking about is way more complicated than that.

What I’m imagining is to say something like “An HTTPS server exists”, and for the system to figure out how to build one. Obviously, that information alone is not enough; it would require human-level training in programming and an understanding of all the concepts involved.

But my question is, what could you do to build a system to support such a “goal statement”? What would the key parts be?

It seems that, at first, the simplest goal is “Action x is performed”. This is what is required to change the current world into the desired (goal) world. For example, “Add is performed on 1 and 2” is a goal stating that the “add” function is applied to the two arguments. It seems that from this foundation you can build up higher and higher levels of abstraction, to the point where you could then say “An HTTPS server exists”. But this HTTPS server is a structure, not an action, so you need some way to have intermediate goals that translate goals into structures rather than actions. Perhaps “The result of operation x exists” is a simple transformation between the two.
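To make this concrete, here is a toy sketch of how I imagine it could be represented: “x exists” goals become facts about the world, and actions have precondition facts and effect facts, so performing actions transforms the current world into one where the goal fact holds. This is just an illustration I made up, not an existing system, and all the facts and action names are invented:

#include <stdio.h>
#include <string.h>

#define MAX_FACTS 16

/* Toy sketch: the world is a set of facts, a goal is a fact that must hold,
   and an action consumes a precondition fact and adds an effect fact. */
typedef struct {
    const char *name;   /* "Action x is performed" */
    const char *needs;  /* single precondition fact, for simplicity */
    const char *adds;   /* "The result of x exists" */
} Action;

const char *world[MAX_FACTS];
int n_facts = 0;

int holds(const char *fact) {
    for (int i = 0; i < n_facts; i++)
        if (strcmp(world[i], fact) == 0) return 1;
    return 0;
}

void add_fact(const char *fact) {
    if (!holds(fact) && n_facts < MAX_FACTS) world[n_facts++] = fact;
}

/* Naive forward search: apply any applicable action until the goal holds.
   A real system would need proper search, variables, learned knowledge, etc. */
void achieve(const char *goal, const Action *acts, int n_acts) {
    while (!holds(goal)) {
        int progress = 0;
        for (int i = 0; i < n_acts; i++) {
            if (holds(acts[i].needs) && !holds(acts[i].adds)) {
                printf("perform: %s\n", acts[i].name);
                add_fact(acts[i].adds);
                progress = 1;
            }
        }
        if (!progress) { printf("stuck: cannot reach the goal\n"); return; }
    }
    printf("goal holds: %s\n", goal);
}

int main(void) {
    const Action acts[] = {
        {"write socket listener", "toolchain exists",       "socket listener exists"},
        {"add TLS layer",         "socket listener exists", "HTTPS server exists"},
    };
    add_fact("toolchain exists");
    achieve("HTTPS server exists", acts, 2);
    return 0;
}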

But then I’m stuck, haha. What do the goals look like in this intermediate realm? Has anyone done any research into this area? Searching doesn’t yield much, though it does bring up a book, Goal Programming and Extensions, which I might have to purchase.

Dynamic programming problem

Our university is closed because of COVID-19, and I’m trying to learn dynamic programming at home.

In our algorithms book there is the following problem (given as an example problem for dynamic programming):

A driver has 3 cars. He wants to use all 3 of them, but wants to use the least gas. Each car has a different engine, so it consumes a different amount of gas to reach each destination. After he switches cars, he can’t go back to a previous car. What is the least amount of gas he can consume while still using each car?

Input description:

First there is a number $n$, the number of destinations he wants to reach. Then there are 3 lines, each giving the gas consumed to reach each destination with that car (line 1 is car 1, line 2 is car 2, etc.).

Example input:

7

2 4 1 5 1 1 2

3 3 2 5 3 2 2

1 1 5 4 3 3 3

Example output:

12 (third car for 2 destinations, then first car for 4 destinations, then second car for 1 destination)
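To make the example concrete: in that split, car 3 pays 1+1 = 2 for the first two destinations, car 1 pays 1+5+1+1 = 8 for the next four, and car 2 pays 2 for the last one, for a total of 12. Here is a small check of that split (my own code, assuming the destinations must be visited in the given order):

#include <stdio.h>

int main(void) {
    /* gas[c][d] = gas car c needs to reach destination d (the example above) */
    int gas[3][7] = {
        {2, 4, 1, 5, 1, 1, 2},   /* car 1 */
        {3, 3, 2, 5, 3, 2, 2},   /* car 2 */
        {1, 1, 5, 4, 3, 3, 3},   /* car 3 */
    };
    /* the split from the example output: car 3, then car 1, then car 2 */
    int order[3] = {2, 0, 1};
    int len[3]   = {2, 4, 1};
    int total = 0, d = 0;
    for (int k = 0; k < 3; k++)
        for (int i = 0; i < len[k]; i++, d++)
            total += gas[order[k]][d];
    printf("%d\n", total);   /* prints 12 */
    return 0;
}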

However, I can’t figure out how to start, as I’m still learning DP. Could you please help me or give me any hints?

I think it’s a good example problem, because other problems are very similar, so if I master this one I should be able to solve the others too.

Thanks!

How to approach weighted job/interval scheduling problem with 2 machines (dynamic programming)

Given N jobs, where every job is represented by a start time, a finish time, and an associated value (>= 0), and two machines that can do the jobs:

The goal is to find the maximum-value subset of the jobs that can be scheduled on the two machines, i.e., no two jobs assigned to the same machine overlap.

How should I approach this? I could only think of using the solution for a single machine and then doing it again for the second machine…
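For reference, this is roughly the single-machine solution I had in mind (a rough sketch with made-up example values; jobs are assumed to be given sorted by finish time, and p(j) is found with a simple linear scan):

#include <stdio.h>

#define N 4

/* Example jobs, sorted by finish time (made-up values). */
int start[N]  = {1, 2, 4, 6};
int finish[N] = {3, 5, 6, 8};
int value[N]  = {5, 6, 5, 4};

/* p(j): index of the latest job that finishes before job j starts, or -1. */
int pred(int j) {
    for (int i = j - 1; i >= 0; i--)
        if (finish[i] <= start[j]) return i;
    return -1;
}

int main(void) {
    long M[N + 1];   /* M[k] = best value using only the first k jobs */
    M[0] = 0;
    for (int j = 0; j < N; j++) {
        int p = pred(j);
        long take = value[j] + M[p + 1];   /* schedule job j */
        long skip = M[j];                  /* don't schedule job j */
        M[j + 1] = take > skip ? take : skip;
    }
    printf("best value on one machine: %ld\n", M[N]);
    return 0;
}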

What is the earliest use of the “this” keyword in any programming language?

I understand that this (or self or Me) is used to refer to the current object, and that it is a feature of object-oriented programming languages. The earliest language I could find with such a concept was Smalltalk, which uses self, but I was wondering where and when (in which programming language) the concept was first implemented.

Question about importance of dynamic programming

I am really struggling with dynamic programming. Every time I see an explanation of an algorithm it seems very logical, but when I try to come up with one myself I do not know where to start.

The steps I follow are usually the following (see the toy sketch after the list):

  • try to find trivial cases
  • try to find optimal subproblems
  • try to build the recursion tree and see what overlaps
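For a toy problem like Fibonacci (my own example, not from the book), the three steps would look roughly like this:

#include <stdio.h>

/* Toy illustration of the three steps on Fibonacci:
   - trivial cases: fib(0) = 0, fib(1) = 1
   - optimal subproblems: fib(n) = fib(n-1) + fib(n-2)
   - overlapping subproblems: the naive recursion tree recomputes fib(n-2),
     fib(n-3), ... many times, so each value is stored once (memoization). */

long long memo[91];   /* 0 means "not computed yet"; fib(90) still fits */

long long fib(int n) {
    if (n <= 1) return n;               /* trivial cases */
    if (memo[n] != 0) return memo[n];   /* overlapping subproblem: reuse it */
    memo[n] = fib(n - 1) + fib(n - 2);  /* combine the subproblem optima */
    return memo[n];
}

int main(void) {
    printf("%lld\n", fib(50));   /* 12586269025 */
    return 0;
}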

So my questions are:

  1. How important is mastering dynamic programming in a computer science career?
  2. Any good resources for learning dynamic programming? (I currently use “Introduction to Algorithms”, 3rd edition.)

Good book on the history of programming languages?

I’d like to read a book on the history of programming languages that places their development into the context of their times. What was the context in which concepts like structured programming, object-oriented programming, etc. were developed? I was introduced to programming when OOP was already well developed, so I don’t really know what it was like for OOP to be developed for the first time; similarly for structured programming, procedural programming, etc.

Is there a good book that starts with the first programmers who worked directly in computer code, and goes through the different developments, showing how the developments were non-trivial at the time (even though from our current perspective they might seem natural and obvious)?

Time complexity of linear programming

I have a linear program with $n$ variables, $m$ constraints, and $O(nm)$ total bit length (the constraint matrix contains only zeros and ones). The time complexity for solving the linear program is known to be polynomial, $O(n^a m^b)$, for some integers $a$ and $b$. What is the best known pair $a, b$ where the value of $a$ is minimal?

Competitive Programming

Consider the following function. Choose a value for ??? in the call f(???) so that the array v will be written outside of its boundaries. Assume that sizeof(char) == 1, i.e., 8 bits wide. The correct answer is 250, but I don’t understand why. Can someone explain this piece of code to me, please?

#include <stdlib.h>   /* malloc */

int *f(int x) {
    unsigned char i = (unsigned char)x;
    int *v = (int *)malloc(100 * sizeof(int));
    if (v != NULL && (char)i < 100)
        v[i] = x;
    return v;
}

Runtime of weighted interval scheduling dynamic programming algorithm

Consider this implementation of a dynamic programming algorithm for weighted interval scheduling:

M-Compute-Opt(j)
    If j = 0 then
        Return 0
    Else if M[j] is not empty then
        Return M[j]
    Else
        Define M[j] = max(v_j + M-Compute-Opt(p(j)), M-Compute-Opt(j − 1))
        Return M[j]
    Endif

Here we have a set of requests $\{1, 2, \ldots, j\}$. We’re assuming they’re ordered by finishing time in nondecreasing order, i.e., $j$ finishes last, $j-1$ second last, etc. $v_j$ is the weight assigned to interval $j$. Also, $p(j)$ is the interval to the left of $j$ that ends as close to the beginning of $j$ as possible without overlapping it. We’re assuming these were also computed beforehand.

The textbook I’m looking at says the runtime is $O(n)$ because a single call to M-Compute-Opt is $O(1)$ and we call it twice for every empty entry in array M. I almost buy it, except it seems to me that we could end up calling it more often for some $i \in \{1, \ldots, j\}$ if the function $p$ maps lots of elements to that $i$. For example, if there is some interval $i$ such that a ton of intervals start right after it ends, $p$ would map lots of intervals to it. And of course M-Compute-Opt wouldn’t be called from within those instances, since the value would be stored the first time, but it seems those calls would run nonetheless.
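To convince myself, I tried a small instrumented version with a global call counter (my own sketch; the weights v and predecessor indices p below are made up): every call either returns in $O(1)$ (the base case or a memo hit) or fills exactly one entry of M and makes exactly two further calls, so the total number of calls is at most $2n + 1$.

#include <stdio.h>

#define N 6

/* Made-up example data: weights v[1..N] and predecessors p[1..N], p[j] < j. */
int v[N + 1] = {0, 2, 4, 4, 7, 2, 1};
int p[N + 1] = {0, 0, 0, 1, 0, 3, 3};

long M[N + 1];       /* memo table */
int  filled[N + 1];  /* filled[j] = 1 once M[j] has been computed */
long calls = 0;      /* counts every invocation, including memo hits */

long compute_opt(int j) {
    calls++;
    if (j == 0) return 0;                   /* base case: O(1) */
    if (filled[j]) return M[j];             /* memo hit: O(1) */
    long take = v[j] + compute_opt(p[j]);   /* use interval j */
    long skip = compute_opt(j - 1);         /* skip interval j */
    M[j] = take > skip ? take : skip;
    filled[j] = 1;
    return M[j];
}

int main(void) {
    long opt = compute_opt(N);
    printf("opt = %ld, calls = %ld (bound 2n + 1 = %d)\n", opt, calls, 2 * N + 1);
    return 0;
}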

I guess in general I understand the argument given in the book, but I was wondering if there is a good way to understand the linear running time from a more intuitive standpoint. I’m not used to calculating runtimes, and I feel like if I saw a different problem like this one I wouldn’t come up with the “trick” used.