How to optimise this IN query?

Let's say I have this schema: Resource(resource_mapping_id: uuid, resource_id: uuid, node_id: varchar, date: date, available: boolean, resource_start: varchar, resource_end: varchar)

So I have a composite key formed on (resource_mapping_id, resource_id, node_id, date, resource_start, resource_end).
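For concreteness, a minimal DDL sketch of the table, assuming the composite key is the primary key (names and constraint layout are illustrative):

create table resource (
    resource_mapping_id uuid not null,
    resource_id uuid not null,
    node_id varchar not null,   -- a UUID stored as text
    date date not null,
    available boolean,
    resource_start varchar not null,
    resource_end varchar not null,
    primary key (resource_mapping_id, resource_id, node_id, date, resource_start, resource_end)
);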

Note: node_id is also a UUID, but stored as text. Now I have these two queries:

update resource set available = :value where resource_id=:somevalue and date=:somedate and resource_start=:sometime and resource_end=:sometime

And

select * from resource where resource_id in (:resourceidlist) and date in (:dates) and node_id in (:nodeIds)

This table contains a huge number of records, around 500 million or so.

Whenever I run these queries from my Java application through JPA, the CPU utilisation of the database spikes up to 100%.

After doing some analysis, I created an index, say:

Index(resource_id, node_id, date)
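In DDL form that is roughly (the index name is illustrative):

create index idx_resource_rid_nid_date on resource (resource_id, node_id, date);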

This fixed the issue with the update query: even when it runs in parallel threads, the CPU no longer spikes at all.
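(For reference, a plain EXPLAIN, which does not execute the update, is enough to see whether the planner uses the new index; the literal values below are placeholders, not real data:)

explain
update resource set available = true
where resource_id = '00000000-0000-0000-0000-000000000000'
  and date = '2024-01-01'
  and resource_start = '09:00'
  and resource_end = '10:00';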

But coming to the select statement, I was still having issues when the number of parameters grew. So I batched them: each batch processes x node ids, resource ids and dates. Even so, with 100 parameters per list (note: all parameter lists have the same size, so if I say 100, it is 300 parameters in total), the CPU spikes and the other threads go into a waiting state!
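To make the shape concrete, a batch of 3 (with placeholder bind values) expands to something like:

select * from resource
where resource_id in (?, ?, ?)
  and date in (?, ?, ?)
  and node_id in (?, ?, ?);

so a batch of 100 binds 300 parameters in one statement.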

How can I resolve this issue? Should I change my query, or should I make some other change, such as creating a further index specifically for this situation? Please help me.

I am using Postgres v13.