My approach to AI with Cursor

How things have changed since I last wrote about my use of AI

This is a continuation of the “My approach to AI” post.

Some stuff has changed since the last time I talked about my AI usage, but some stuff has stayed the same. My stance on the morals and ethics of AI use, e.g. “AI Art”, is still the same: I don’t like how generative AI is used, and nowadays I’m even more against it because of the increase in RAM and SSD prices it has caused.

I was just about to start building a PC for my Linux usage :c

A quick recap

I started using AI more in my work and my day-to-day life, testing some AI Agents and using web UIs. I did try Cursor and other alternatives for vibe coding in agentic ways, but they didn’t seem too useful to me. I always came back to using AI as a question-and-answer tool, where I could just get the answers I needed and then do the work myself.

Things have changed

Around a month after that post, I decided to use OpenCode instead of AIChat. It’s just a better experience for a terminal-based environment. That didn’t last long, though.

Some big things happened in these 5-6 months. The company I work at decided to give us all a shared Cursor subscription, so I was in that plan whether I wanted it or not. “Not that I’m gonna use it anyway,” I thought. But things happen.

In the first week of October, I was on the brink of not finishing a feature in time…

A story

It was a Sunday with a lot of stuff to do, and I had promised the week before to have the feature done by Monday. I knew what to do and my planning was flawless, so what was missing was the code, code I didn’t have the time to think about and write by myself. So, in desperate times, I decided to trust an Agent mode for once.

I installed Cursor at around 20:00 and started providing the whole context to the AI, and to my surprise it came up with a good base for what I was thinking of doing. For the first time, the code an AI produced was good enough that I didn’t need to write it myself; it was that good. And even though I could’ve done it myself, the AI was 100 times faster than me.

By 22:00 the codebase was mostly perfect. I just needed to test some things, add more edge cases, fight a bit with the AI and fix some things myself, and finally, at around 5:00 the next day (yes, I didn’t sleep), I was satisfied with the results.

Out of the 200 free requests available per month in the Cursor subscription, I had already used 110 in a single night.

However, I was still missing some edge cases and further testing, and some things were still a bit wrong, so I ended up finishing the feature the next day. Still, it was a surprisingly good experience, and since that day I’ve been converted to Cursor.

Databases and SQL

I consider myself a good coder; I think I could solve almost any problem (that isn’t algorithm-related) by coding it myself. Yet databases and SQL are not my specialty, so a lot of the stuff I created had very bad performance due to the strict use of ActiveRecord-only queries, not a single line of raw SQL in sight. Luckily, my conversion to Cursor had already happened, so I decided to give it a try with SQL.

To keep it short: performance was greatly improved. Requests that took around 5s now took a few hundred milliseconds. How did I do it? It was actually quite simple: I copied the SQL query the Rails code was generating, and gave it to the AI alongside the full EXPLAIN VERBOSE output of that same query and its average execution time. Most of the time the AI gave me 2-3 possible rewrites, so I tested them manually and validated both the results and the performance. With all that I was able to optimize a lot of requests.
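A minimal sketch of that workflow, assuming a hypothetical Order model on PostgreSQL (the model, columns and slow query are made up for illustration):

    # Grab the exact SQL that ActiveRecord generates for the slow request.
    relation = Order.where(status: "pending").order(created_at: :desc) # hypothetical slow query
    sql = relation.to_sql
    puts sql

    # Ask PostgreSQL for the plan; adding ANALYZE also reports real execution time.
    plan = ActiveRecord::Base.connection.execute("EXPLAIN (VERBOSE, ANALYZE) #{sql}")
    plan.each { |row| puts row["QUERY PLAN"] }

    # Both outputs go to the AI. Its suggested rewrite comes back as raw SQL,
    # which I validate manually (results and timing) before swapping it in
    # for the ActiveRecord version with something like:
    optimized = Order.find_by_sql(<<~SQL)
      SELECT orders.*
      FROM orders
      WHERE orders.status = 'pending' -- placeholder: the AI-suggested query goes here
      ORDER BY orders.created_at DESC
    SQL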

Local LLMs

Recently I bought a new laptop, a 14” MacBook Pro with an M4 Pro (14-core CPU, 20-core GPU) and 48GB of RAM, so I decided to try some local LLM stuff with LM Studio. I tried some small and medium models and they worked very well, yet I don’t find them that useful beyond security and “owning” the LLM myself. I didn’t want to have a 10-30GB model sitting in my RAM doing nothing most of the time.
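If I ever wire one of these local models into my own scripts instead of just chatting in the app, the rough idea would be something like the sketch below, assuming LM Studio’s local server is running with its OpenAI-compatible API on the default port; the model identifier and prompt are placeholders:

    require "net/http"
    require "json"
    require "uri"

    # LM Studio can run a local server exposing an OpenAI-compatible API;
    # localhost:1234 is its default port. The model name must match whatever
    # model is currently loaded in LM Studio.
    uri = URI("http://localhost:1234/v1/chat/completions")
    body = {
      model: "local-model", # placeholder identifier
      messages: [{ role: "user", content: "Summarize this EXPLAIN output: ..." }],
      temperature: 0.2
    }

    response = Net::HTTP.post(uri, body.to_json, "Content-Type" => "application/json")
    puts JSON.parse(response.body).dig("choices", 0, "message", "content")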

Perhaps I could start using local LLMs more with my own cluster. I’ve been thinking about building one myself, but with the recent price increases caused by AI itself… well, that plan will stay on hold until 2030.

Conclusions

I don’t have much more to say. I still don’t like AI and will still end up writing code myself, but I’m also more open to AI Agents now; they’re definitely useful for quick code, feature planning and some question-and-answer prompts. Just these past few days I’ve been using Cursor to plan a feature and make some quick changes, yet I’m still changing some things and adding some logic myself.

I’m not fully converted to vibe coding, and I don’t think I ever will be, but given the premise of my last post… perhaps I’ll end up advocating for AI rights in the future (hopefully not).