Could Someone Give Me Advice for Optimizing Performance in Large-Scale Applications on Velneo?

Hello there, :wave:

I have been working on a fairly large-scale application using Velneo, and as the project grows, I have started to notice some performance issues, particularly when handling larger datasets and more complex processes. I have already gone through the documentation and applied some of the standard performance optimization techniques, but I am still encountering slowdowns, especially during peak usage times.

Are there any tips for optimizing data management when working with huge tables? I have heard about breaking down tables, but I am not sure how to implement this effectively in Velneo.

I am running into performance bottlenecks with background tasks and certain processes that involve complex calculations. What strategies have you used to optimize long-running or resource-intensive processes? :thinking:

Does anyone have experience with efficiently managing memory usage or leveraging caching techniques to improve speed and reduce lag? :thinking:

Also, I have gone through this post, https://foros.velneo.com/t/no-entiendo-mucho-los-layouts-del-objeto-informe-ccsp/, which definitely helped me out a lot.

If you have worked on a similar project, could you share some of the lessons you’ve learned or adjustments that made a significant difference?

Thanks in advance for your help. :innocent:

Hi chrismartin,

Sorry for answering in Spanish, but my English is quite weak.
What you are asking has no simple answer; you have to analyze each table or each screen to find the different causes of that slowness.

I have seen processes that run a "filter list" over 100,000 records and take 30 seconds, whereas the same thing is instantaneous if you do it with indexes. But that is just one example.
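To illustrate the point (in plain Python, not Velneo code, since I cannot post the actual process here): filtering a list scans every record on every query, while an index built once turns each lookup into a direct access. The record layout and the `customer` field below are hypothetical.

```python
# 100,000 hypothetical records, each belonging to one of 1,000 customers.
records = [{"id": i, "customer": i % 1000} for i in range(100_000)]

# "Filter list": every query scans all 100,000 records -> O(n) per query.
slow = [r for r in records if r["customer"] == 42]

# "Index": build a lookup table once, then each query is O(1) on average.
index = {}
for r in records:
    index.setdefault(r["customer"], []).append(r)
fast = index.get(42, [])

assert slow == fast  # same result, very different cost per query
```

The difference only grows with repeated queries: the scan pays the full cost every time, while the index pays it once up front.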

Grids with indirect links are another source of problems.

Post more concrete cases and maybe we can help you.

Another technique used to optimize very long processes is to split them into chunks. It takes less time to run 1,000 repetitions of 100 elements than one repetition of 100,000, because of the transaction size.
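The chunking idea can be sketched like this (again in generic Python, not Velneo code; `handle` and `commit` are placeholder callbacks standing in for your per-record work and your transaction commit):

```python
def process_in_chunks(items, chunk_size, handle, commit):
    """Process items in batches, committing one small transaction per batch
    instead of accumulating everything in a single huge transaction."""
    for start in range(0, len(items), chunk_size):
        for item in items[start:start + chunk_size]:
            handle(item)
        commit()  # keeps each transaction bounded to chunk_size changes

# Example: 100,000 items in chunks of 100 -> 1,000 small commits.
processed = []
commits = []
process_in_chunks(list(range(100_000)), 100,
                  processed.append,
                  lambda: commits.append(len(processed)))
```

The total work is the same; what changes is that each transaction stays small, so the engine never has to hold 100,000 pending changes at once.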

What you’re asking, Chris, requires more than just simple advice to make a process more efficient. In this case, when discussing a large-scale application, it first requires an analysis to identify where the bottleneck is.