Todo List

Namespace iox

  • this might need to be public when the logger is used in templates
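
A minimal sketch of why this comes up, using illustrative names rather than the actual iceoryx logger API: the body of a template in a public header is compiled in the user's translation unit, so every declaration it references must be shipped in public headers as well and can no longer stay library-internal.

    // hypothetical sketch; 'detail::log' stands in for the internal logger
    namespace iox
    {
    namespace detail
    {
    // if templates in public headers call this, its declaration must be
    // visible wherever those templates are instantiated
    void log(const char* message) noexcept;
    } // namespace detail

    template <typename T>
    void process(const T&) noexcept
    {
        // instantiated in the user's translation unit, which therefore
        // needs to see the logger's declaration
        detail::log("processing a value");
    }
    } // namespace iox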

Member iox::MAX_CLIENTS_PER_SERVER

Member iox::MAX_RECEIVERS_PER_SENDERPORT

  • remove MAX_RECEIVERS_PER_SENDERPORT when the new port building blocks are used

Member iox::MAX_REQUESTS_ALLOCATED_SIMULTANEOUSLY

Member iox::mepoo::ChunkManagement::m_mempool

  • optimization: check if this can be replaced by an offset relative to the this pointer
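
A minimal sketch of the proposed optimization, with assumed names: instead of storing a pointer to the mempool, store the byte distance between the object and the mempool. Since both live in the same shared-memory segment, the offset stays valid no matter at which base address the segment is mapped in each process.

    #include <cstddef>
    #include <cstdint>

    struct MemPool; // assumed to live in the same shared-memory segment

    class ChunkManagementSketch
    {
      public:
        explicit ChunkManagementSketch(MemPool* mempool) noexcept
            : m_mempoolOffset(reinterpret_cast<const std::uint8_t*>(mempool)
                              - reinterpret_cast<const std::uint8_t*>(this))
        {
        }

        MemPool* mempool() noexcept
        {
            // reconstruct the pointer from 'this' plus the stored offset
            return reinterpret_cast<MemPool*>(reinterpret_cast<std::uint8_t*>(this) + m_mempoolOffset);
        }

      private:
        std::ptrdiff_t m_mempoolOffset; // offset relative to the this pointer
    };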

Member iox::mepoo::SharedChunk::operator== (const void *const rhs) const noexcept

  • use the newtype pattern to avoid the void pointer
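
A short sketch of the newtype idea with illustrative names (not the actual fix): wrapping the raw pointer in a distinct type makes it impossible to accidentally compare a SharedChunk against an arbitrary, unrelated pointer.

    // strong wrapper around the raw address we actually want to compare with
    class ChunkAddress
    {
      public:
        explicit ChunkAddress(const void* ptr) noexcept
            : m_ptr(ptr)
        {
        }

        const void* get() const noexcept
        {
            return m_ptr;
        }

      private:
        const void* m_ptr{nullptr};
    };

    // instead of: bool operator==(const void* const rhs) const noexcept;
    // the operator would take the strong type:
    // bool operator==(const ChunkAddress& rhs) const noexcept;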

Class iox::popo::ChunkDistributor< ChunkDistributorDataType >

  • There are currently some challenges: the containers used for the stored queues and the history are not thread safe, so we use an inter-process mutex. This can lead to deadlocks if a user process is terminated while one of its threads is in the ChunkDistributor and holds the lock. An easier setup would be one in which changing the queues (done by a middleware thread) and sending chunks (done by the user process) do not interleave, i.e. there is no concurrent access to the containers; then memory synchronization would be sufficient. The cleanup() call is the biggest challenge. It is used to free chunks that are still held by a user application that was not properly terminated. Even if accesses from middleware and user threads do not overlap, the history container to clean up could be in an inconsistent state if the application was hard-terminated while changing it. We would need a container like the UsedChunkList that is robust against such inconsistencies... a perfect job for our future selves

Member iox::popo::ChunkDistributor< ChunkDistributorDataType >::cleanup () noexcept

  • currently we have a deadlock / mutex-destruction vulnerability if the ThreadSafePolicy is used and a sending application dies while holding the lock for sending. If the RouDi daemon then wants to clean up or performs discovery changes, we run into a deadlock or an exception when destroying the mutex. As long as we don't have a multi-threaded lock-free ChunkDistributor or another concept, we die here
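
One well-known POSIX mechanism for exactly this failure mode is a robust mutex: when the owner dies while holding the lock, the next locker receives EOWNERDEAD instead of deadlocking and can repair the protected state. This is only a hedged sketch of that mechanism, not what iceoryx currently does here.

    #include <errno.h>
    #include <pthread.h>

    // initialize a process-shared mutex that survives owner death
    void initRobustInterprocessMutex(pthread_mutex_t* mutex)
    {
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED); // usable across processes
        pthread_mutexattr_setrobust(&attr, PTHREAD_MUTEX_ROBUST);    // detect owner death
        pthread_mutex_init(mutex, &attr);
        pthread_mutexattr_destroy(&attr);
    }

    // returns true if locked with consistent data; false if the previous owner
    // died while holding the lock and the shared state needs a repair pass
    bool lockDetectingOwnerDeath(pthread_mutex_t* mutex)
    {
        const int result = pthread_mutex_lock(mutex);
        if (result == EOWNERDEAD)
        {
            // we own the lock now, but the data may be half-written; mark the
            // mutex usable again and let the caller clean up
            pthread_mutex_consistent(mutex);
            return false;
        }
        return result == 0;
    }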

Member iox::popo::ChunkDistributorData< ChunkDistributorDataProperties, LockingPolicy, ChunkQueuePusherType >::HistoryContainer_t

  • If we made the ChunkDistributor lock-free, could we then extend the UsedChunkList to work like a ring buffer and use that for the history? This would be needed to be able to safely clean up. ShmSafeUnmanagedChunk is used since RouDi must access this list to clean up the chunks in case of an application crash.
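
A minimal sketch of the direction this hints at, under assumed semantics (a single writing process, a supervising cleanup process, invented names): a fixed-capacity ring where each slot's payload is written before an atomic flag publishes it, so a hard-terminated writer leaves every slot either fully valid or free.

    #include <atomic>
    #include <cstdint>

    template <typename T, std::uint64_t Capacity>
    class RobustRingBufferSketch
    {
      public:
        RobustRingBufferSketch() noexcept
        {
            for (auto& used : m_used)
            {
                used.store(false, std::memory_order_relaxed);
            }
        }

        bool push(const T& value) noexcept // called only by the owning process
        {
            const std::uint64_t slot = m_writeIndex % Capacity;
            if (m_used[slot].load(std::memory_order_acquire))
            {
                return false; // full
            }
            m_data[slot] = value; // write the payload first ...
            // ... then publish it; a crash before this line leaves the slot free
            m_used[slot].store(true, std::memory_order_release);
            ++m_writeIndex;
            return true;
        }

        template <typename Cleaner>
        void cleanupAll(Cleaner&& cleaner) noexcept // safe after a writer crash
        {
            for (std::uint64_t i = 0U; i < Capacity; ++i)
            {
                // only slots that were fully published are visited
                if (m_used[i].load(std::memory_order_acquire))
                {
                    cleaner(m_data[i]);
                    m_used[i].store(false, std::memory_order_release);
                }
            }
        }

      private:
        T m_data[Capacity];
        std::atomic<bool> m_used[Capacity];
        std::uint64_t m_writeIndex{0U};
    };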

Member iox::popo::UsedChunkList< Capacity >::insert (mepoo::SharedChunk chunk) noexcept

  • can we do this cheaper with a global fence in cleanup? (see the sketch after the matching remove() entry below)

Member iox::popo::UsedChunkList< Capacity >::remove (const mepoo::ChunkHeader *chunkHeader, mepoo::SharedChunk &chunk) noexcept

  • can we do this cheaper with a global fence in cleanup?
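
The same question is asked for insert() above. A sketch of the trade-off, with illustrative globals instead of the UsedChunkList internals: today every hot-path operation pays for release ordering; the alternative would be relaxed writes plus one heavyweight fence in cleanup(). Note that C++ fence-to-fence synchronization still requires a fence or release operation on the writer side, so a single fence in cleanup() alone is only sound under additional guarantees (e.g. cleanup runs strictly after the writing process has terminated).

    #include <atomic>
    #include <cstdint>

    std::atomic<std::uint32_t> g_listHead{0U};
    std::uint32_t g_slots[128U];

    void insertToday(std::uint32_t slot, std::uint32_t value)
    {
        g_slots[slot] = value;
        // per-operation cost: the release store orders the payload write
        // before the publication of the slot
        g_listHead.store(slot, std::memory_order_release);
    }

    void insertCheaper(std::uint32_t slot, std::uint32_t value)
    {
        g_slots[slot] = value;
        g_listHead.store(slot, std::memory_order_relaxed); // no ordering paid here
    }

    void cleanup()
    {
        // one global fence instead of one ordering constraint per insert/remove;
        // pairs with the relaxed variant only under the guarantees noted above
        std::atomic_thread_fence(std::memory_order_seq_cst);
        const std::uint32_t head = g_listHead.load(std::memory_order_relaxed);
        // ... walk the list starting at 'head' and release the chunks ...
        static_cast<void>(head);
    }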

Namespace iox::roudi

  • Move everything in this namespace to iceoryx_roudi_types.hpp once we move RouDi to a separate CMake target

Member iox::roudi::PortManager::stopPortIntrospection () noexcept

  • Remove this later

Member iox::roudi::PortPool::getPublisherPortDataList () noexcept

  • don't create the vector with each call, but only when the data really changes; there could be a member like "cxx::vector<popo::PublisherPortData*, MAX_PUBLISHERS> m_publisherPorts;" and publisherPorts() would just update this member if the publisher ports actually changed
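
A minimal sketch of that caching scheme, using std::vector and invented names in place of the iceoryx types: the member list is rebuilt only when a change counter indicates that ports were actually added or removed.

    #include <cstdint>
    #include <vector>

    struct PublisherPortData;

    class PortPoolSketch
    {
      public:
        const std::vector<PublisherPortData*>& publisherPorts() noexcept
        {
            if (m_lastSeenChange != m_changeCounter)
            {
                rebuildList(); // only when the ports actually changed
                m_lastSeenChange = m_changeCounter;
            }
            return m_publisherPorts; // otherwise the cached member is returned
        }

      private:
        void rebuildList() noexcept
        {
            // ... refill m_publisherPorts from the underlying port storage ...
        }

        std::vector<PublisherPortData*> m_publisherPorts;
        std::uint64_t m_changeCounter{0U};  // incremented on add/remove of a port
        std::uint64_t m_lastSeenChange{0U};
    };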

Member iox::roudi::PortPoolMemoryBlock::PortPoolMemoryBlock () noexcept=default

  • the PortPool needs to be refactored to use a typed MemPool; once that is done, the ctor needs a configuration similar to MemPoolCollectionMemoryProvider

Updated on 17 June 2021 at 11:15:27 CEST