Just deleted 50,000,000 rows from a PostgreSQL table and the DB is still very slow

Sorry for my poor English.

I have a Postgres DB running on Amazon RDS (db.t3.small), with Django as the backend. By mistake I created 50,000,000 rows in one table. When I figured it out (because queries on that table were ultra slow), I deleted them all, but queries against that table are still super slow, even though it only has about 300 rows now.
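
As far as I remember, the cleanup was an ordinary bulk DELETE through the Django ORM, so roughly equivalent to this (the WHERE condition below is only a placeholder, not the one I actually used):

-- rough reconstruction of the cleanup; the real condition matched the rows
-- created by mistake, the one below is just a placeholder
DELETE FROM appointment_timeslot
WHERE id > 300;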

Do I have to clear some cache? Do I have to wait for something to happen? The RDS configuration in AWS is the default one.
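
One thing I am considering (but have not run yet, because I am not sure it is safe on a small RDS instance) is a manual vacuum of that table, something like:

-- candidate fix I am considering, not yet run; as I understand it,
-- VACUUM FULL rewrites the table and takes an exclusive lock on it
VACUUM (FULL, ANALYZE, VERBOSE) appointment_timeslot;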

The Postgres engine version is 12.5, and PostGIS is installed as well.

I checked for vacuum issues and ran this query:

SELECT relname          AS TableName,
       n_live_tup       AS LiveTuples,
       n_dead_tup       AS DeadTuples,
       last_autovacuum  AS Autovacuum,
       last_autoanalyze AS Autoanalyze
FROM pg_stat_user_tables;
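
If it helps with diagnosing this, I believe I can also check the physical on-disk size of the table with something like the following (I have not run it yet):

-- total on-disk size of the table, including its indexes and TOAST data
SELECT pg_size_pretty(pg_total_relation_size('appointment_timeslot'));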

For the table with the problem it shows:

'appointment_timeslot', 1417, 0, datetime.datetime(2021, 7, 21, 18, 13, 8, 193967, tzinfo=<UTC>), datetime.datetime(2021, 7, 21, 18, 13, 30, 551750, tzinfo=<UTC>) 

I also checked the indexes that Django creates automatically on that table and found these 4:

[
  ('appointment_timeslot_pkey', 'CREATE UNIQUE INDEX appointment_timeslot_pkey ON public.appointment_timeslot USING btree (id)'),
  ('appointment_timeslot_home_visit_id_62df4faf', 'CREATE INDEX appointment_timeslot_home_visit_id_62df4faf ON public.appointment_timeslot USING btree (home_visit_id)'),
  ('appointment_timeslot_office_id_6871b47b', 'CREATE INDEX appointment_timeslot_office_id_6871b47b ON public.appointment_timeslot USING btree (office_id)'),
  ('appointment_timeslot_time_range_id_715578fa', 'CREATE INDEX appointment_timeslot_time_range_id_715578fa ON public.appointment_timeslot USING btree (time_range_id)')
]
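
(I got that list with a query on pg_indexes, roughly like this:)

-- index names and definitions for the table
SELECT indexname, indexdef
FROM pg_indexes
WHERE tablename = 'appointment_timeslot';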