How do I edit this texconv "IMGtoBC3.bat" file so it reads from and preserves the directory structure when writing the resulting DDS files?


Like the title says:

How do I edit this texconv "IMGtoBC3.bat" file so it reads from and preserves the directory structure when writing the resulting DDS files?

I have a large folder with sub-folders containing many TIFFs that share the same file names, divided up into "sets".

These DDS files are for custom textures in a Unity game; the file naming is forced, but the files are sorted/selected by folder.

I have about 3000 file sets ready to convert, and I'm not doing them one by one in the Nvidia tool (I hope). The AMD tool and the game have a MAJOR conflict (it's a no-go).

BONUS points to anyone who can show me this .bat file using Unity's new tree of "CRUNCH" command-line DDS compressor/encoder. As for suggestions of other tools: I have tested over 50 apps, and for my needs the ranking in quality/support is Nvidia, then texconv, then crunch.

HERE IS THE CODE:

```
ECHO OFF
Setlocal EnableDelayedExpansion

:: Variables
SET @FORMAT=BC3_UNORM
SET @InputFolder=%~dp0Input_IMG_TO_BC3\
SET @OutputFolder=%~dp0Output_DXT5_BC3\
SET @TEXCONVEXE=%~dp0texconv.exe
SET @TEXCONVEXE02=%~dp0texconv.exe

:: Check for texconv.exe
IF EXIST "%@TEXCONVEXE%" SET @TEXCONVEXE=1
IF "%@TEXCONVEXE%"=="1" GOTO EXESTATE_1

:EXESTATE_0
TITLE - ERROR! texconv.exe not found!!!
COLOR 04
ECHO: && ECHO: && ECHO:
ECHO                 === ERROR! texconv.exe not found! ===
ECHO:
ECHO     Install Path: "%~dp0texconv.exe"
ECHO:
ECHO    The script needs texconv.exe in order to work properly.
ECHO:
ECHO    Please make sure texconv.exe is in: "%~dp0"
ECHO: && ECHO:
GOTO CONT01

:EXESTATE_1
TITLE - Texconv.exe found!!!
COLOR 0A
ECHO: && ECHO: && ECHO:
ECHO                 [ texconv.exe Is Installed! ]
ECHO:
ECHO     "%~dp0texconv.exe"
ECHO:
GOTO CONT00

:CONT01
ECHO: && ECHO:
ECHO        Please copy/move the missing texconv.exe executable to where the script needs it to be and refresh this window.
ECHO:

:CONT00
IF "%@TEXCONVEXE%"=="1" GOTO START
ECHO: && ECHO: && ECHO     [Press any key to refresh the window] && PAUSE>NUL
GOTO SetTexConvPath

:START
:: Customize CMD Window
TITLE HumanStuff TexConv Batch Directory Script v1.0.2
PROMPT $G
COLOR 04
CLS

:: Make The Folders
IF NOT EXIST "%@InputFolder%" MKDIR "%@InputFolder%"
IF NOT EXIST "%@OutputFolder%" MKDIR "%@OutputFolder%"

:: Run TexConv.exe
:: -srgb was added because PNG images were getting high contrast colors
:: Sorry about the messy code but this was harder to do than it sounds
FOR /R "%@InputFolder%" %%i IN (*.*) DO (
    set word=%@OutputFolder%
    set str=%%~dpi
    CALL :REPLACESTRING
    SET @IFOL=!@OSTRING!
    CALL :MKFOL
    SET @ISTRING=%%i
    CALL :TexConv01
)

PAUSE
GOTO SCRIPTEND

:MKFOL
IF NOT EXIST "%@IFOL%" (
    MKDIR "%@IFOL%"
)
GOTO SCRIPTEND

:TexConv01
IF NOT "%@LOGO%"=="" SET @LOGO=-nologo
"%@TEXCONVEXE02%" %@LOGO% -srgb -nogpu -pow2 -vflip -if triangle -bc u -f %@FORMAT% "%@ISTRING%" -o "%@OSTRING%" -y
ECHO:
SET @LOGO=
GOTO SCRIPTEND

:REPLACESTRING
call set str=%%str:%@InputFolder%=%word%%%
set @OSTRING=!str:~0,-1!
GOTO SCRIPTEND

:SCRIPTEND
```
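For readers who find the batch string surgery hard to follow, here is the same read-the-tree-and-mirror-it logic sketched in Python (the texconv arguments are copied from the batch file above; treat this as an illustration of the idea, not a drop-in replacement):

```
# Mirror the input directory tree into the output tree, invoking texconv
# per file with the same arguments as the batch file above (illustrative).
import subprocess
from pathlib import Path

ROOT = Path(__file__).parent            # analogue of %~dp0
INPUT = ROOT / "Input_IMG_TO_BC3"
OUTPUT = ROOT / "Output_DXT5_BC3"
TEXCONV = ROOT / "texconv.exe"

for src in INPUT.rglob("*"):
    if not src.is_file():
        continue
    # Re-root the file's parent directory under OUTPUT, preserving structure.
    out_dir = OUTPUT / src.parent.relative_to(INPUT)
    out_dir.mkdir(parents=True, exist_ok=True)
    subprocess.run([str(TEXCONV), "-nologo", "-srgb", "-nogpu", "-pow2",
                    "-vflip", "-if", "triangle", "-bc", "u",
                    "-f", "BC3_UNORM", str(src), "-o", str(out_dir), "-y"],
                   check=True)
```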

Big-O of iterating through nested structure

While trying to understand complexity, I ran into an example of going through records organized in the following way:

```
data = [
  {"name": "category1", "files": [{"name": "file1"}, {"name": "file2"}]},
  {"name": "category2", "files": [{"name": "file3"}]},
]
```

The task requires going through all file records, which is straightforward:

```
for category in data:
  for file in category["files"]:
    pass
```

It seems like the complexity of this algorithm is O(n * m), where n is the length of data and m is the maximum length of the files array across the data records. But is O(n * m) the only correct answer?

Because even though there are two for-loops, it still looks like iterating over a global array of file records that happens to be organized in a nested way. Is it legitimate to compare it with iteration over a different structure like this:

```
data = [
  ('category1', 'file1'),
  ('category1', 'file2'),
  ('category2', 'file3'),
]

for category, file in data:
  pass
```

…where the complexity is obviously O(n), with n being the total number of records?
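One way to reconcile the two views is to count how many times the inner loop body actually runs. A quick sketch using the example data above:

```
# Count loop-body executions: the nested loop does exactly as much work
# as the flat one -- once per file record.
data = [
    {"name": "category1", "files": [{"name": "file1"}, {"name": "file2"}]},
    {"name": "category2", "files": [{"name": "file3"}]},
]

steps = 0
for category in data:
    for file in category["files"]:
        steps += 1

total_files = sum(len(c["files"]) for c in data)
print(steps == total_files)  # True: work is proportional to total file records
```

So O(n * m) is a valid upper bound, but the tighter statement is O(N), where N is the total number of file records, which is exactly the flat-list view.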

Visualizing a directory structure as a tree map of rectangles

There’s this nice tool called WinDirStat which lets you scan a directory and view files in a rectangular tree map. It looks like this:

[screenshot: WinDirStat's rectangular tree map]

The size of each block is related to the file size, and blocks are grouped by directory and coloured distinctly according to the top-level directory. I'd like to create a map like this in Mathematica. First I get some file names in the tree of my Mathematica installation and calculate their sizes:

```
fassoc = FileSystemMap[FileSize, File[$InstallationDirectory], Infinity, IncludeDirectories -> False];
```

Level Infinity ensures it traverses the whole tree. I could also add 1 to ensure the association is flattened, but I want the nesting so I can assign total sizes per directory.

I can find the total size which I’ll need to use to scale the rectangles:

```
QuantityMagnitude[Total[Cases[fassoc, Quantity[_, _], Infinity]], "Bytes"]
```

My idea is to recursively apply this total. In theory I could use this to do something like this with a tree graph and weight the vertices by size, but I want to convert this into a map of rectangles like in WinDirStat. While the colour is obvious – each level 1 association and all its descendants gets a RandomColor[] – I’m not sure how I should go about positioning the rectangles in a Graphics. Any ideas?
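Not a Mathematica answer, but for the positioning question specifically, here is a minimal slice-and-dice treemap sketch in Python (the simplest layout scheme; WinDirStat uses the fancier squarified variant). The helper names and the toy tree are made up for illustration:

```
# Slice-and-dice treemap layout: alternate the split axis at each nesting
# level, giving every child a slab proportional to its total size.
def treemap(node, x0, y0, x1, y1, vertical=True, out=None):
    """node is either a size (leaf) or a dict of name -> node."""
    if out is None:
        out = []
    if not isinstance(node, dict):           # leaf: emit one rectangle
        out.append(((x0, y0), (x1, y1)))
        return out

    def total(n):                            # recursive size of a subtree
        return n if not isinstance(n, dict) else sum(total(c) for c in n.values())

    full = total(node)
    offset = 0.0
    for child in node.values():
        frac = total(child) / full           # share of the parent rectangle
        if vertical:                         # split the x-range
            cx0 = x0 + (x1 - x0) * offset
            cx1 = x0 + (x1 - x0) * (offset + frac)
            treemap(child, cx0, y0, cx1, y1, not vertical, out)
        else:                                # split the y-range
            cy0 = y0 + (y1 - y0) * offset
            cy1 = y0 + (y1 - y0) * (offset + frac)
            treemap(child, x0, cy0, x1, cy1, not vertical, out)
        offset += frac
    return out

# Example: a tiny "directory tree" with file sizes in bytes
tree = {"bin": {"a": 300, "b": 100}, "docs": {"c": 200}, "readme": 400}
for rect in treemap(tree, 0, 0, 1, 1):
    print(rect)
```

Translating this to nested associations and Rectangle primitives in a Graphics should be mechanical: each level of the association gets a slab of its parent's rectangle in proportion to its recursive total.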

Data structure for efficient group lookup

I need a data structure which allows efficient queries for "give me the group of x".

Let me give you an example:

```
Group 1: [a, b, c]
Group 2: [d, e]
Group 3: [f]

getGroupOf(d) -> [d, e]
```

There are no significant constraints on storage or construction time. I only need getGroupOf to be O(log n) or faster.

I am thinking about using a Dictionary<Element, Set<Element>> where the entries for all elements in a group share the same set reference. This would make lookup effectively O(1) or O(log n), depending on the dictionary implementation, but it would result in a lot of entries.

This feels fairly bloated, and I am wondering: is there a more elegant data structure to accomplish this?
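For what it's worth, the shared-reference idea from the question fits in a few lines of Python (add_group is a made-up helper name):

```
# Every element maps to the *same* set object as the rest of its group,
# so lookup is a single dictionary access.
group_of: dict[str, set[str]] = {}

def add_group(members: list[str]) -> None:
    group = set(members)              # one shared set object per group
    for m in members:
        group_of[m] = group           # all members point at the same set

add_group(["a", "b", "c"])
add_group(["d", "e"])
add_group(["f"])

print(group_of["d"])                  # {'d', 'e'} -- O(1) average lookup
```

The per-element entries are exactly what buys the fast lookup; each set exists only once, so the overhead is one dictionary slot per element, not a copy of the group.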

When would you use an edge list graph data structure instead of an adjacency list or adjacency matrix?

In what applications would you choose an edge list over an adjacency list or an adjacency matrix?

Sample question from VisuAlgo: which graph data structure(s) should you best use to store a simple undirected graph with 200 vertices and 19900 edges, where the edges need to be sorted? Suppose your computer only has enough memory to store 40000 entries.

There are three choices: adjacency lists, adjacency matrix, and an edge list.

Edge lists are the correct answer here because sorting by weight is most efficient, but what are some other use cases?
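For the sorting case in the sample question, the point is that an edge list is already a flat array of edges, so sorting needs no extra structure. A tiny Python illustration (made-up weights):

```
# An edge list is just a flat list of (weight, u, v) tuples, which makes
# "sort the edges by weight" a one-liner.
edges = [(4, 0, 1), (1, 1, 2), (3, 0, 2)]   # (weight, u, v)
edges.sort()                                 # sorts by weight first
print(edges)                                 # [(1, 1, 2), (3, 0, 2), (4, 0, 1)]
```

Kruskal's minimum-spanning-tree algorithm is the classic consumer of exactly this representation.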

Thanks!

Is there a way to store an arbitrarily big BigInt in a bit sequence, only later to convert it into a standard BigInt structure?

I am trying to imagine a way of encoding a BigInt into a bit stream, so that it is literally just a sequence of bits. Then upon decoding this bit stream, you would generate the standard BigInt sort of data structure (array of small integers with a sign). How could you encode the BigInt as a sequence of bits, and how would you decode it? I don’t see how to properly perform the bitwise manipulations or how to encode an arbitrary number in bits larger than 32 or 64. If a language is required then I would be doing this in JavaScript.

For instance, this takes four bytes and packs them into a single 32-bit integer:

```
function arrayOfBytesTo32Int(map) {
  return map[0] << 24
    | map[1] << 16
    | map[2] << 8
    | map[3]
}
```

How would you do that same sort of thing for arbitrarily long bit sequences?
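One possible scheme, sketched in Python for brevity (JavaScript's BigInt supports the same <<, >>, & and | operators on arbitrarily large values, so it translates directly): store a sign byte followed by base-256 "limbs".

```
# Encode a big integer as a sign byte plus little-endian base-256 limbs,
# then decode it back by shifting the limbs in, most significant first.
def encode(n: int) -> bytes:
    sign = 1 if n < 0 else 0
    n = abs(n)
    limbs = bytearray()
    while True:
        limbs.append(n & 0xFF)      # take the low 8 bits
        n >>= 8                     # shift the rest down
        if n == 0:
            break
    return bytes([sign]) + bytes(limbs)

def decode(data: bytes) -> int:
    sign, limbs = data[0], data[1:]
    n = 0
    for b in reversed(limbs):       # most significant limb first
        n = (n << 8) | b
    return -n if sign else n

assert decode(encode(2**200 + 7)) == 2**200 + 7
assert decode(encode(-12345)) == -12345
```

A real format would also need a length prefix if the stream is to hold more than one number; that part is omitted here.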

Bloom filter like data structure supporting iteration

I am wondering whether there exists a data structure similar to a Bloom filter, in the sense that it is an approximate finite set representation (it allows false positives, but no false negatives) and supports:

  • constant time complexity union of two sets
  • constant time complexity insertion of an element

but that also allows efficient iteration over the approximate elements of the set, i.e. iterating over all elements that are either actually in the set or are false positives, with a time complexity that is linear in the number of elements in the set.
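To ground the question, here is a minimal classic Bloom filter in Python: it shows why insertion and union are independent of the number of stored elements, and also why iteration is not supported (the bits do not remember which elements set them). Sizes and hash choices are arbitrary, for illustration only:

```
# Minimal Bloom filter: insert sets k bit positions, union is a bitwise OR
# of two equally-sized filters. Neither operation scales with the number
# of elements stored -- and neither leaves anything to iterate over.
import hashlib

M, K = 1024, 3                           # bits, hash functions

def _positions(item: str):
    for i in range(K):
        h = hashlib.sha256(f"{i}:{item}".encode()).digest()
        yield int.from_bytes(h[:4], "big") % M

def make():
    return [0] * M

def insert(bf, item):
    for p in _positions(item):
        bf[p] = 1

def union(a, b):
    return [x | y for x, y in zip(a, b)]  # independent of element count

def maybe_contains(bf, item):
    return all(bf[p] for p in _positions(item))

a, b = make(), make()
insert(a, "x"); insert(b, "y")
u = union(a, b)
print(maybe_contains(u, "x"), maybe_contains(u, "z"))  # True, probably False
```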

Efficiently storing and modifying a reorderable data structure in a database

I’m trying to create a list-curation web app. One thing that’s important to me is being able to drag-and-drop to reorder items in the list easily. At first I thought I could just store the order of each item, but with the reordering requirement, that would mean renumbering everything further down the list than the place where you removed the moved item or inserted it. So I started looking at data structures that are friendly to reordering, or to both deletion and insertion. I’m looking at binary trees, probably red-black trees or something like that. I feel like I could, with great effort, probably implement the algorithms for manipulating those.

So here’s my actual question. All the tree tutorials assume you’re creating these trees in memory, usually through instantiating a simple Node class or whatever. But I’m going to be using a database to persist these structures, right? So one issue is how do I represent this data, which is kind of a broad question, sure. I would like to not have to read the whole tree from the database to update the order of one item in the list. But then again if I have to modify a lot of nodes that are stored as separate documents, that’s going to be expensive too, even if I have an index on the relevant fields, right? Like, every time I need to manipulate a node I need to actually find it in the database first, I think.

Also, I obviously need the content and order of each node on the front end in order to display the list, but do I need the client to know the whole tree structure in order to be able to send updates to the server, or can I just send the item id and its new position in the list or something?

I’m sorry if this veers too practical but I thought someone might want to take a crack at it.
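Not the tree approach the question leans toward, but for comparison, one common reorder-friendly scheme is a fractional "order key" per item, which turns a move into a single-row update. A toy sketch (floats for brevity; production systems usually use arbitrary-precision strings so the midpoints never run out):

```
# Each item carries a sortable order key; moving an item means assigning
# it a key between its new neighbours, so only one row changes.
items = {"a": 1.0, "b": 2.0, "c": 3.0}   # item id -> order key

def move_between(item, left_key, right_key):
    items[item] = (left_key + right_key) / 2   # single-row update

# Move "c" between "a" and "b": only "c"'s key changes in the database.
move_between("c", items["a"], items["b"])
print(sorted(items, key=items.get))      # ['a', 'c', 'b']
```

This also answers the client question under this scheme: the client only needs to send the item id and its new neighbours, not any tree structure.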

What is the suitable file structure for a database if queries are select (relational algebra) operations only?

A relation R(A, B, C, D) has to be accessed under the query σ_{B=10}(R). Out of the following possible file structures, which one should be chosen, and why?

  i. R is a heap file.
  ii. R has a clustered hash index on B.
  iii. R has an unclustered B+ tree index on (A, B).

Do non-PostgreSQL databases use roughly the same “structure” for communicating with them?

Basically, I have developed a PostgreSQL-based application which "in theory" could have its database software swapped out, but doing so would probably cause a million headaches if I actually attempted it. I’m trying to determine whether the other SQL databases (I frankly don’t care about non-SQL ones in the least, because they seem too different for me to bother with them in this life) have the following concepts:

  1. "hostname"
  2. "port"
  3. "username"
  4. "password"
  5. "handle database" (such as "postgres", which must be used to connect when there is no other database or when certain operations are to be done to the actual database)
  6. "database name"

I guess I’m fairly sure already about all the points except the 5th. The concept of a "handle database" seems like it might be PG-only. If that is the case, I’m not sure how I should handle it, but I’m awaiting your answers before I make a decision.

I have a good mind to just forget about ever supporting other database systems, but the way my system is structured basically forces me to at least try to "genericize" the communication with the database, with functions called "database_" rather than "PostgreSQL_". (Even when the queries sent to these functions would only work on PG…)
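As an illustration of the "database_" genericizing described above, here is a hedged Python sketch of bundling the six concepts into one structure. Every name in it is made up, and whether point 5 applies at all depends on the engine:

```
# Plain-data bundle of the six connection concepts, passed to a
# driver-specific backend chosen at runtime (names are illustrative).
from dataclasses import dataclass

@dataclass
class DatabaseConfig:
    hostname: str
    port: int
    username: str
    password: str
    handle_database: str   # e.g. "postgres"; may be unused by other engines
    database_name: str

def database_connect(cfg: DatabaseConfig, use_handle: bool = False):
    # Dispatch to a driver-specific connect function here; 'use_handle'
    # selects the maintenance/"handle" database when the engine has one.
    dbname = cfg.handle_database if use_handle else cfg.database_name
    print(f"connecting to {cfg.hostname}:{cfg.port}/{dbname} as {cfg.username}")

database_connect(DatabaseConfig("localhost", 5432, "app", "secret",
                                "postgres", "mydb"), use_handle=True)
```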