The response from the API is getting too long
I am using AutoGen as a chatbot and calling the GitHub action GITHUB_GET_A_PULL_REQUEST, but the response from this API is too long to process through the GPT API and throws this error:
Error code: 400 - {'error': {'message': "Invalid 'messages[6].content': string too long. Expected a string with maximum length 1048576, but got a string with length 1077750 instead.", 'type': 'invalid_request_error', 'param': 'messages[6].content', 'code': 'string_above_max_length'}}
Now, is there a way to use chunking for this? I looked through the AutoGen docs but didn't find a method there.
So is there a way to chunk the response of the Composio API before sending it to AutoGen?
5 Replies
unwilling-turquoise•2mo ago
You could use a post-processor to modify/filter/chunk the output of the Composio tool call:
https://docs.composio.dev/patterns/tools/use-tools/processing-actions
generous-apricotOP•2mo ago
I need the full pull request response, so now I am thinking of calling the GITHUB_GET_A_PULL_REQUEST API without the LLM and breaking the response into parts that an LLM would be able to process.
Hey, you can do that using a post-processor. Basically, the way it works is that the post-processor is called before the response is received by the LLM.
A post-processor is just a Python function you write to modify the response, so you can make sure responses are minified in whatever way you want your agent to consume.
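A minimal sketch of what such a post-processor could look like. The trimming logic and the field names it touches are assumptions for illustration, not the actual GitHub response schema; the registration via `processors={"post": ...}` follows the Composio docs linked above:

```python
# Hypothetical post-processor: truncates oversized string fields in a tool
# response before it reaches the LLM. MAX_FIELD_CHARS is an assumed budget,
# not a Composio or OpenAI constant.

MAX_FIELD_CHARS = 20_000  # rough per-field budget; tune for your model

def trim_large_fields(response: dict) -> dict:
    """Return a copy of `response` with long string values truncated."""
    trimmed = {}
    for key, value in response.items():
        if isinstance(value, str) and len(value) > MAX_FIELD_CHARS:
            trimmed[key] = value[:MAX_FIELD_CHARS] + "... [truncated]"
        else:
            trimmed[key] = value
    return trimmed

# Wiring it up (per the Composio "processing actions" docs) would look
# roughly like this:
# tools = toolset.get_tools(
#     actions=[Action.GITHUB_GET_A_PULL_REQUEST],
#     processors={"post": {Action.GITHUB_GET_A_PULL_REQUEST: trim_large_fields}},
# )
```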
generous-apricotOP•2mo ago
But the response I get from GITHUB_GET_A_PULL_REQUEST is too big, e.g. when there are a lot of file changes and I want to process them all. So to work around it, I am calling the GITHUB_GET_A_PULL_REQUEST action directly using ComposioToolSet().execute_action, chunking the response, and then sending the chunks to the LLM one by one.
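For reference, a sketch of that workaround. The chunk size and the loop that feeds chunks to the agent are assumptions you'd adapt; the `execute_action` call (shown in comments with placeholder params) follows the Composio SDK:

```python
# Sketch of the workaround described above: fetch the PR directly,
# with no LLM in the loop, serialize it, and split the text into
# chunks small enough for the model. CHUNK_CHARS is an assumption,
# chosen to stay well under the 1,048,576-char limit from the error.

import json

CHUNK_CHARS = 500_000

def chunk_text(text: str, size: int = CHUNK_CHARS) -> list[str]:
    """Split `text` into consecutive chunks of at most `size` characters."""
    return [text[i:i + size] for i in range(0, len(text), size)]

# Direct call, bypassing the LLM (owner/repo/pull_number are placeholders):
# from composio import ComposioToolSet, Action
# response = ComposioToolSet().execute_action(
#     action=Action.GITHUB_GET_A_PULL_REQUEST,
#     params={"owner": "...", "repo": "...", "pull_number": 1},
# )
# for chunk in chunk_text(json.dumps(response)):
#     ...  # send each chunk to the agent one at a time
```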
Got it. That would also work.