If your database is that large and you have that little space, you may have limited options:
1 - Work with your DBA to create a query that purges the logs in small chunks (about 2,000 rows at a time). If your DBMS supports it, you may want to write the query so the changes are not written to the transaction log, since those entries take space on the server. If you don't, you may run out of space as the transaction log fills up.
2 - You may also try using the wizard to remove small chunks of data. For example, delete any rows more than 365 days old, then more than 300 days, then 240, and so on. If a single pass would affect more than 50,000 rows, narrow the date range further.
3 - If you don't care about the log records, or the situation has become critical, you may want to consider simply truncating the table. In many (if not all) DBMSs, this deletes every row in the table without recording the changes in the transaction log. This is about your quickest solution; HOWEVER, THIS WILL REMOVE ALL LOG RECORDS WITH NO OPTION TO RESTORE (except from a server backup).
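The batched-purge idea in options 1 and 2 can be sketched in Python with sqlite3. This is only a minimal illustration, not your product's actual schema: the table name (log_records), the column (created_at), and the sample data are all hypothetical stand-ins, and the commit-per-batch behavior maps only loosely onto a real server's transaction log.

```python
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log_records (id INTEGER PRIMARY KEY, created_at TEXT)")

# Seed hypothetical data: 10 rows older than a year, 5 recent rows.
now = datetime(2024, 1, 1)
for days_ago in list(range(400, 410)) + list(range(5)):
    ts = (now - timedelta(days=days_ago)).isoformat()
    conn.execute("INSERT INTO log_records (created_at) VALUES (?)", (ts,))
conn.commit()

def purge_older_than(conn, cutoff, batch_size=2000):
    """Delete rows older than `cutoff` in small batches, committing after
    each batch so log/journal space is released as you go."""
    total = 0
    while True:
        cur = conn.execute(
            "DELETE FROM log_records WHERE id IN "
            "(SELECT id FROM log_records WHERE created_at < ? LIMIT ?)",
            (cutoff.isoformat(), batch_size),
        )
        conn.commit()  # release space after every batch
        total += cur.rowcount
        if cur.rowcount == 0:
            break
    return total

# Walk the cutoff down in steps, as option 2 suggests: 365, then 300, then 240 days.
for days in (365, 300, 240):
    purge_older_than(conn, now - timedelta(days=days), batch_size=2000)

remaining = conn.execute("SELECT COUNT(*) FROM log_records").fetchone()[0]
print(remaining)  # only the 5 recent rows survive
```

On a real server you would run the equivalent DELETE in a loop from a script your DBA approves, tuning the batch size to what your transaction log can absorb between commits.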
Regardless of the path you take, engage the DBA in your solution. They may be able to help you create the scripts, and/or have suggestions on how to approach the problem both short-term and long-term.
Once you get the logs cleared, you may also want to evaluate what is being written to the logDB. How many applications send Interaction- and Trace-level records to the network? Do you actually need Interaction and Trace records sent to the network at all?
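A quick way to start that evaluation is to count rows per logging level and see what dominates. Again a hedged sqlite3 sketch: the table name (log_records), the level column, and the sample values are hypothetical, and your logDB may record the level differently.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log_records (id INTEGER PRIMARY KEY, level TEXT)")

# Hypothetical sample: Trace records usually dwarf everything else.
sample = ["Trace"] * 6 + ["Interaction"] * 3 + ["Error"] * 1
conn.executemany(
    "INSERT INTO log_records (level) VALUES (?)", [(lvl,) for lvl in sample]
)
conn.commit()

# Count rows per level, largest contributor first.
rows = conn.execute(
    "SELECT level, COUNT(*) FROM log_records GROUP BY level ORDER BY COUNT(*) DESC"
).fetchall()
for level, count in rows:
    print(level, count)
```

If a breakdown like this shows Trace-level records making up most of the volume, that is a strong hint the long-term fix is reducing what the applications send, not just purging more often.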