Create custom workflow on distributed processes#

This example shows how distributed files can be read and postprocessed on distributed processes. After remote postprocessing, the results are merged on the local process. Different operator sequences are created directly on different servers, and these operators are then connected together without having to worry that they live on remote processes.


Import the dpf module and its example files

from ansys.dpf import core as dpf
from ansys.dpf.core import examples
from ansys.dpf.core import operators as ops

Configure the servers#

To make this example easier, we start two local servers here, but we could also connect to any existing servers on the network.

remote_servers = [
    dpf.start_local_server(as_global=False),
    dpf.start_local_server(as_global=False),
]
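
If DPF servers are already running somewhere on the network, we could connect to them instead of starting new ones. A minimal sketch, assuming the IP addresses and ports below are where your servers listen:

# Hypothetical addresses; replace with those of your running servers.
remote_servers = [
    dpf.connect_to_server(ip="10.0.0.1", port=50052, as_global=False),
    dpf.connect_to_server(ip="10.0.0.2", port=50052, as_global=False),
]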

Here we show how files could be uploaded to a temporary server folder when the servers do not share the local machine's file system.

files = examples.download_distributed_files()
server_file_paths = [dpf.upload_file_in_tmp_folder(files[0], server=remote_servers[0]),
                     dpf.upload_file_in_tmp_folder(files[1], server=remote_servers[1])]
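
If the servers do share the local machine's file system (for example, servers started on this machine as above), the upload step is unnecessary and the downloaded paths can be used directly; a sketch of that alternative:

# The servers can read the local files directly, so no upload is needed.
server_file_paths = [files[0], files[1]]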

First operator chain.

remote_operators = []

stress1 = ops.result.stress(server=remote_servers[0])
remote_operators.append(stress1)
ds = dpf.DataSources(server_file_paths[0], server=remote_servers[0])
stress1.inputs.data_sources(ds)

Second operator chain.

stress2 = ops.result.stress(server=remote_servers[1])
mul = stress2 * 2.0
remote_operators.append(mul)
ds = dpf.DataSources(server_file_paths[1], server=remote_servers[1])
stress2.inputs.data_sources(ds)
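
Note that stress2 * 2.0 does not compute anything at this point: it wraps stress2 in an additional scaling operator on the same remote server, and the multiplication is only evaluated once an output is requested downstream. A quick sanity check (not part of the original script):

# mul is itself a DPF Operator living on remote_servers[1]; the scaling by
# 2.0 is recorded in the operator graph, not evaluated here.
print(type(mul))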

Local merge operator. Because no server argument is passed, it is created on the default local server, where the remote results are gathered.

merge = ops.utility.merge_fields_containers()

Connect the operator chains together and get the output#

nodal = ops.averaging.to_nodal_fc(merge)

merge.connect(0, remote_operators[0], 0)
merge.connect(1, remote_operators[1], 0)
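
The same wiring can also be written with the generated inputs API, where connecting an operator to an input takes its first output pin. This assumes the merge operator exposes fields_containers1/fields_containers2 input names, which is how this utility operator is typically generated:

# Equivalent to the explicit pin connections above (assumed input names).
merge.inputs.fields_containers1(remote_operators[0])
merge.inputs.fields_containers2(remote_operators[1])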

fc = nodal.get_output(0, dpf.types.fields_container)
print(fc[0])


DPF  Field
  Location: Nodal
  Unit: Pa
  432 entities
  Data: 6 components and 432 elementary data
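
The plot in the gallery comes from displaying the merged, nodally averaged field on its mesh. A minimal sketch, assuming the field carries its mesh support (which fields read from result files usually do):

# Plot the merged nodal stress on the mesh it was read from.
fc[0].meshed_region.plot(fc[0])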

Total running time of the script: (0 minutes 5.209 seconds)
