I had the hardest time getting Qt Widgets to be usable on top of a QVulkanWindow, but I believe I’ve found a general method for using Qt Widgets on top of native drawing APIs.

The simplified version is:

  • Wrap your native drawing window in a widget container (QWidget::createWindowContainer ())
  • Create a central widget to attach child widgets to
  • Set the central widget as a child of your native drawing container
  • Add a QImage field to the central widget
  • Override paintEngine () to return the QImage field’s paintEngine ()
  • Upload the QImage to native rendering each frame as necessary

This of course has some overhead from copying image buffers around, especially to the GPU, as well as from putting the updated texture into the correct format if you’re using Vulkan. But given the flexibility and power built into Qt Widgets, I definitely see the overhead being worth it.

I’ll give a little bit of sample code from my current project using Vulkan to demonstrate.

Central Widget

Let’s start with the central widget. I’m assuming you already have a robust renderer set up, doing all the magic drawing you want in whatever way you please, and that you now want to put (alpha-enabled) widgets on top. If you’re just doing a viewport with widget support on the side, then the rest of this post won’t help you.

In your constructor, make sure to set the WA_NoSystemBackground and WA_TranslucentBackground attributes. These tell Qt not to fill the widget background when drawing, and ensure that it uses an alpha channel when drawing as well. You’ll also want to call setAutoFillBackground (false) to keep Qt from filling the background with system colors during draws.

centralWidget_t::centralWidget_t (QVulkanWindow *vulkan_) : m_vulkan (vulkan_)
{
	setAttribute (Qt::WA_NoSystemBackground);
	setAttribute (Qt::WA_TranslucentBackground);
	setAutoFillBackground (false);

	m_vulkan->installEventFilter (this);
}

I added an event filter to track my Vulkan window’s resize event. This allows me to keep the central widget positioned at the window extents:

bool centralWidget_t::eventFilter (QObject *const watched_, QEvent *const event_)
{
	Q_UNUSED (watched_);

	if (!event_)
		return false;

	QSize newSize;

	switch (event_->type ())
	{
	case QEvent::Type::Resize:
		newSize = dynamic_cast<QResizeEvent *> (event_)->size ();
		break;

	default:
		return false;
	}

	resize (newSize);
	move (0, 0);

	return false;
}

You’ll also want to override the paintEngine () method of the central widget. This is necessary to keep Qt from trying to draw everything to the framebuffer itself, while still allowing widget interactions. In the code below, m_image is a QImage member.

QPaintEngine *centralWidget_t::paintEngine () const
{
	return m_image.paintEngine ();
}

Finally, we need to get the frame uploaded and drawn. I’ll leave the transition, copy, and pipeline handling mechanisms as an exercise for the reader; if you’ve gotten this far, these should be relatively simple.

void centralWidget_t::prepareDraw (VkCommandBuffer cmd_)
{
	auto const deviceFn = m_vulkan->deviceFn ();
	auto const device   = m_vulkan->device ();
	auto const frameId  = m_vulkan->currentFrame ();

	if (!m_img[frameId]) // this frame's VkImage hasn't been created yet
		return;

	m_frameDirty = std::clamp (m_frameDirty - 1, 0, m_vulkan->concurrentFrameCount ());

	if (m_frameDirty <= 0) // nothing new to upload this frame
		return;

	m_image.fill (Qt::transparent);
	QPainter painter (&m_image);
	render (&painter);

	unsigned const width  = size ().width ();
	unsigned const height = size ().height ();

	// data hasn't been mapped before; keep it persistently mapped
	if (!m_memMap[frameId])
		deviceFn->vkMapMemory (
		    device, m_txMem[frameId], 0, imageSize (size ()), 0, &m_memMap[frameId]);

	memcpy (m_memMap[frameId], m_image.bits (), m_image.sizeInBytes ());

	VkBufferImageCopy region{};
	region.bufferOffset                    = 0;
	region.bufferRowLength                 = 0; // 0 means tightly packed
	region.bufferImageHeight               = 0;
	region.imageSubresource.aspectMask     = VK_IMAGE_ASPECT_COLOR_BIT;
	region.imageSubresource.mipLevel       = 0;
	region.imageSubresource.baseArrayLayer = 0;
	region.imageSubresource.layerCount     = 1;
	region.imageOffset                     = {0, 0, 0};
	region.imageExtent                     = {width, height, 1};

	// see https://vulkan-tutorial.com for reference functions
	transitionImageLayout (...); // to VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL

	deviceFn->vkCmdCopyBufferToImage (...); // copy the one region into m_img[frameId]

	transitionImageLayout (...); // back to VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL
}

void centralWidget_t::draw (VkCommandBuffer const cmd_)
{
	auto const frameId = m_vulkan->currentFrame ();

	if (!m_img[frameId]) // this is a VkImage
		return;

	// if you're using QVulkanWindowRenderer defaults, this method should record and
	// execute a secondary command buffer using the default renderpass settings
	drawImage (cmd_, m_view[frameId], m_sampler[frameId]);
}

I know I haven’t shown you the class members or all the initialization. I’m going to justify this by saying you shouldn’t need to see them if you’ve studied the Vulkan API and vulkan-tutorial, and I don’t want you just copy-pasting this. I’ll eventually release this code under GPL anyway.


Now we need to ship commands off to the GPU. If you’re using the QVulkanWindowRenderer and QVulkanWindow defaults, you’re probably already using the default renderpass, which contains one subpass. If you’re not, have fun; you’re probably already ahead of me in terms of capabilities. In general your process should be:

  • Prepare your widget texture transitions and upload
  • Begin your renderpass and subpass draw commands
  • Execute your scene’s secondary command buffers
  • Execute your widget texture draw commands

That will probably look a little like this:

void startNextFrame () override
{
	auto const device   = m_video->device ();
	auto const deviceFn = m_video->deviceFn ();
	m_frameId           = m_video->currentFrame ();
	auto const cmd      = m_video->currentCommandBuffer ();

	// start threaded drawing
	m_waitFrame[m_frameId] = false;
	emit m_video->draw ();

	// render widget while we wait (note this adds pipeline barriers on the raster step)
	m_video->centralWidget ()->prepareDraw (cmd);

	while (!m_waitFrame[m_frameId])
		std::this_thread::yield ();

	// begin drawing
	VkClearColorValue        clearColor = {{0.0f, 0.0f, 0.0f, 1.0f}};
	VkClearDepthStencilValue clearDS    = {1.0f, 0};
	VkClearValue             clearValues[3];
	memset (clearValues, 0, sizeof (clearValues));
	clearValues[0].color        = clearColor;
	clearValues[1].depthStencil = clearDS;
	clearValues[2].color        = clearColor;

	QSize const    sz     = m_video->swapChainImageSize ();
	unsigned const width  = sz.width ();
	unsigned const height = sz.height ();

	VkRenderPassBeginInfo rpBeginInfo{};
	rpBeginInfo.sType             = VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO;
	rpBeginInfo.renderPass        = m_video->defaultRenderPass ();
	rpBeginInfo.framebuffer       = m_video->currentFramebuffer ();
	rpBeginInfo.renderArea.offset = {0, 0};
	rpBeginInfo.renderArea.extent = {width, height};
	rpBeginInfo.clearValueCount   = 3;
	rpBeginInfo.pClearValues      = clearValues;

	deviceFn->vkCmdBeginRenderPass (
	    cmd, &rpBeginInfo, VK_SUBPASS_CONTENTS_SECONDARY_COMMAND_BUFFERS);

	// submit our command buffers
	if (m_commandBufferList.size () > 0)
		deviceFn->vkCmdExecuteCommands (
		    cmd,
		    static_cast<uint32_t> (m_commandBufferList.size ()),
		    m_commandBufferList.data ());

	// draw the widget onto a view-aligned quad after everything else
	m_video->centralWidget ()->draw (cmd);

	deviceFn->vkCmdEndRenderPass (cmd);

	m_video->frameReady ();
	m_video->requestUpdate ();
}

Main Window

Somewhere in your main, you’ll need to capture your QVulkanWindow into a widget container using QWidget::createWindowContainer ().

Here’s the basic process I followed:

int main (int argc, char *argv[])
{
	QApplication app (argc, argv);

	auto const myVulkan    = new myQVulkanWindow ();
	auto const videoWidget = QWidget::createWindowContainer (myVulkan);

	auto const centralWidget = new centralWidget_t (myVulkan);
	centralWidget->setParent (videoWidget);

	centralWidget->setLayout (new QVBoxLayout ());
	centralWidget->layout ()->addWidget (new mainMenu_t{});

	videoWidget->resize (640, 480);
	videoWidget->show ();

	return app.exec ();
}

And here we go!