You have several options for how to implement an API and its operations.
First, you choose whether you want a single implementation for all environments, or one implementation per environment.
You can also choose to copy an implementation from one environment to another - allowing you to easily change a few settings that differ between environments.
By default, the implementation for your API looks like the one above. This is a generic implementation that is used unless you override it for a particular operation - it performs the same action for all operations within the API. Often, you will simply want to proxy the request onwards to the backend applications where the API is implemented, without doing anything specific for each operation.
But you might also want to mock particular operations, or (e.g. in a Sandbox environment) return fake test data to your clients instead of allowing access to real data.
The default action for all operations that are not overridden looks like this:
You can recognize it by the missing path, and the * in place of the HTTP method.
Scripts / Mocks
You can use a script to generate a response to your clients - this is ideal for creating mocks or testing stubs for APIs.
If you check one of the override checkboxes for an Operation, you will get a script for this particular operation.
There are a few things to note here:
First, the default response is created pre-filled with data that corresponds to the OpenAPI definition - so if you, for example, override the operation "GET /pet/findByStatus" in the Swagger Petstore, Ceptor will generate a default response like this:
This allows you to easily create a mock response that corresponds exactly to the OpenAPI definition.
Ceptor creates comments with any relevant information, like enumeration values or size restrictions expressed within the schema.
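As a hand-written sketch of what such a generated mock might look like - based on the public Swagger Petstore Pet schema; the exact script Ceptor generates may differ in detail - the pre-filled response could resemble:

```javascript
// Illustrative sketch only - not Ceptor's actual generated output.
// Mock response for GET /pet/findByStatus, shaped after the Petstore Pet schema.
var response = [
  {
    id: 0,
    category: { id: 0, name: "string" },
    name: "doggie",
    photoUrls: ["string"],
    tags: [{ id: 0, name: "string" }],
    status: "available" // enum: "available", "pending", "sold"
  }
];
```

Note how schema details such as the enumeration values for "status" surface as comments, so you can adjust the mock without re-reading the specification.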
You can click on the button "Create sample response from specification" to create a new sample response in case you have updated the OpenAPI definition schema.
Note that the above is true for OpenAPI style services - for SOAP services, the implementation is slightly different by default, and you cannot override implementations for individual operations.
Also, note the infobox on the right:
It lists any parameters that you have defined. When the script is called, each parameter name is prefixed with "p" and the first letter after that is uppercased; the resulting variable is then prefilled with the value of the parameter from the client's call.
This allows you to create a script like this:
... assuming the operation has an input parameter called "name".
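To illustrate the naming rule, here is a small hypothetical helper (not part of Ceptor) that derives the script variable name from a parameter name:

```javascript
// Hypothetical helper - shows the documented naming rule:
// prefix with "p" and uppercase the first letter of the parameter name.
function scriptVariableName(param) {
  return "p" + param.charAt(0).toUpperCase() + param.slice(1);
}

// A parameter called "name" is exposed to the script as "pName":
var example = scriptVariableName("name"); // → "pName"
```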
Proxying
Proxying allows you to proxy the request toward a destination server, and stream its response back to the client.
Here, you can either use a predefined destination for your selected environment, or you can use a custom one you specify yourself.
If you selected one implementation for all environments, the option to use a predefined destination is not available.
If you specify a custom destination, you have all the same options available as any destination within the gateway.
Please refer to Config - Destinations for full details on all available options.
You can use Request Modification to easily remove unwanted credentials from the request that you do not wish to forward to the servers you proxy APIs to. You can also use it to add your own custom headers and credentials - hidden from your API Partner applications - which you need in order to call the server on behalf of your API Partners.
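As a conceptual sketch of the idea - the function and header names below are illustrative, not Ceptor's actual Request Modification API:

```javascript
// Illustrative sketch, not Ceptor's real API: strip the client's credential
// and inject a backend credential that API Partners never see.
function modifyRequestHeaders(headers) {
  var out = Object.assign({}, headers);
  delete out["Authorization"];                    // do not forward the client's credential
  out["X-Backend-ApiKey"] = "backend-only-key";   // hypothetical backend credential
  return out;
}
```

The same pattern works in reverse for responses, e.g. stripping backend-internal headers before they reach the client.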
Pipelines and Tasks
If scripts or proxying is not enough for you, you can use Ceptor Gateway's Pipelines and Tasks for your API implementation.
Here, you can drag tasks into a pipeline, allowing you to call multiple different services and combine their responses into one, execute scripts, create branches, and so on.
Please see Pipelines and Tasks for full details about all the tasks available.
When you call remote URLs / services from within a task, all request handling is asynchronous, so you do not block any threads while making the call, as you would if the call were made "manually" from a script. This scales far better for thousands of concurrent requests than calling remote servers from a script does.
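Conceptually - in plain JavaScript promises, not Ceptor's task API - the non-blocking pattern looks like this: both backend calls are started at once, and no thread sits idle waiting for either of them.

```javascript
// Conceptual sketch of asynchronous fan-out, as a pipeline does internally.
// fetchA / fetchB stand in for two backend service calls.
async function combineServices(fetchA, fetchB) {
  // Both calls start immediately; await suspends without blocking a thread.
  const [a, b] = await Promise.all([fetchA(), fetchB()]);
  // Combine both responses into one result object.
  return Object.assign({}, a, b);
}
```

A script-based implementation would typically make the same two calls sequentially and hold a thread for the full duration of both, which is what limits its scalability.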